Test Report: Docker_macOS 17488

292152b7ba2fff47063f7712cda18987a57d80fb:2023-10-25:31605

Failed tests (22/321)

TestDownloadOnly/v1.28.3/json-events (7.11s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-018000 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-018000 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=docker : exit status 40 (7.106865389s)

-- stdout --
	{"specversion":"1.0","id":"911943f3-5e77-4a48-aa84-c987d338383a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-018000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c07ec910-38b8-4a3e-8a1b-936dc185c00f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"24e45c4a-3960-4f22-b547-101abc119497","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig"}}
	{"specversion":"1.0","id":"44d888d4-fe6b-40dc-9a11-014cd923dadc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7c420702-269f-4a7a-8675-0691a967c274","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9401db77-0f9e-454b-bb1b-48d4f94bde73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube"}}
	{"specversion":"1.0","id":"97ad0f76-efbe-4b05-9a45-f58b6ef4b6bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"93c2b2b9-ca60-471b-bc1b-77391b24879c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on existing profile","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6752ff16-1fe1-4732-b9d2-4bf73b575433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-018000 in cluster download-only-018000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb825637-437b-469b-b2cc-6cc0e961f5dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed611a6c-ab84-49e9-94db-caa6c3f2f27c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Downloading Kubernetes v1.28.3 preload ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"109fb413-a13a-4f00-9229-5518a6396622","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520] Decompressors:map[bz2:0xc000721b50 gz:0xc000721b58 tar:0xc000721b00 tar.bz2:0xc000721b10 tar.gz:0xc000721b20 tar.xz:0xc000721b30 tar.zst:0xc000721b40 tbz2:0xc000721b10 tgz:0xc000721b20 txz:0xc00072
1b30 tzst:0xc000721b40 xz:0xc000721b60 zip:0xc000721b70 zst:0xc000721b68] Getters:map[file:0xc00078e6a0 http:0xc000c25270 https:0xc000c252c0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"fe447bce-ec3b-44f4-a29a-1de7e4d9e4c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1025 17:38:55.600100   65328 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:38:55.600370   65328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:38:55.600375   65328 out.go:309] Setting ErrFile to fd 2...
	I1025 17:38:55.600379   65328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:38:55.600552   65328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	W1025 17:38:55.600644   65328 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17488-64832/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17488-64832/.minikube/config/config.json: no such file or directory
	I1025 17:38:55.601972   65328 out.go:303] Setting JSON to true
	I1025 17:38:55.624815   65328 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":31103,"bootTime":1698249632,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 17:38:55.624916   65328 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 17:38:55.646456   65328 out.go:97] [download-only-018000] minikube v1.31.2 on Darwin 14.0
	I1025 17:38:55.668931   65328 out.go:169] MINIKUBE_LOCATION=17488
	I1025 17:38:55.646655   65328 notify.go:220] Checking for updates...
	I1025 17:38:55.690280   65328 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:38:55.712335   65328 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 17:38:55.734009   65328 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:38:55.755067   65328 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	W1025 17:38:55.797273   65328 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 17:38:55.798002   65328 config.go:182] Loaded profile config "download-only-018000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1025 17:38:55.798089   65328 start.go:810] api.Load failed for download-only-018000: filestore "download-only-018000": Docker machine "download-only-018000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 17:38:55.798255   65328 driver.go:378] Setting default libvirt URI to qemu:///system
	W1025 17:38:55.798297   65328 start.go:810] api.Load failed for download-only-018000: filestore "download-only-018000": Docker machine "download-only-018000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 17:38:55.857430   65328 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 17:38:55.857548   65328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:38:55.959655   65328 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2023-10-26 00:38:55.945696378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:38:55.980948   65328 out.go:97] Using the docker driver based on existing profile
	I1025 17:38:55.980985   65328 start.go:298] selected driver: docker
	I1025 17:38:55.980996   65328 start.go:902] validating driver "docker" against &{Name:download-only-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-018000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:38:55.981315   65328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:38:56.083537   65328 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2023-10-26 00:38:56.070643682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:38:56.086776   65328 cni.go:84] Creating CNI manager for ""
	I1025 17:38:56.086801   65328 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 17:38:56.086815   65328 start_flags.go:323] config:
	{Name:download-only-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-018000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:38:56.108319   65328 out.go:97] Starting control plane node download-only-018000 in cluster download-only-018000
	I1025 17:38:56.108357   65328 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 17:38:56.129187   65328 out.go:97] Pulling base image ...
	I1025 17:38:56.129260   65328 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:38:56.129369   65328 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 17:38:56.179981   65328 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 17:38:56.180182   65328 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 17:38:56.180209   65328 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1025 17:38:56.180217   65328 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1025 17:38:56.180231   65328 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1025 17:38:56.185050   65328 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 17:38:56.185061   65328 cache.go:56] Caching tarball of preloaded images
	I1025 17:38:56.186172   65328 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:38:56.207452   65328 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1025 17:38:56.207478   65328 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:38:56.287135   65328 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4?checksum=md5:82104bbf889ff8b69d5c141ce86c05ac -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 17:39:01.450470   65328 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:39:01.450670   65328 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:39:02.075206   65328 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 17:39:02.075302   65328 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/download-only-018000/config.json ...
	I1025 17:39:02.075734   65328 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:39:02.076477   65328 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl
	I1025 17:39:02.544803   65328 out.go:169] 
	W1025 17:39:02.566960   65328 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520] Decompressors:map[bz2:0xc000721b50 gz:0xc000721b58 tar:0xc000721b00 tar.bz2:0xc000721b10 tar.gz:0xc000721b20 tar.xz:0xc000721b30 tar.zst:0xc000721b40 tbz2:0xc000721b10 tgz:0xc000721b20 txz:0xc000721b30 tzst:0xc000721b40 xz:0xc000721b60 zip:0xc000721b70 zst:0xc000721b68] Getters:map[file:0xc00078e6a0 http:0xc000c25270 https:0xc000c252c0] Dir:false ProgressListener:<nil> Insecure:false Disa
bleSymlinks:false Options:[]}: bad response code: 404
	W1025 17:39:02.566983   65328 out_reason.go:110] 
	W1025 17:39:02.590787   65328 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 17:39:02.612805   65328 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-018000" "--force" "--alsologtostderr" "--kubernetes-version=v1.28.3" "--container-runtime=docker" "--driver=docker" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.28.3/json-events (7.11s)
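Note: the root cause above is a plain HTTP 404 from dl.k8s.io for the v1.28.3 darwin/amd64 kubectl binary; exit status 40 (INET_CACHE_KUBECTL) is minikube wrapping that download error. A minimal reproduction sketch, not part of the test suite and assuming direct network access from the runner, is a single HEAD request against the same URL:

    // headcheck.go - hypothetical helper, not minikube code.
    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Same URL the failing download in the log above requests.
        url := "https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl"
        resp, err := http.Head(url)
        if err != nil {
            fmt.Println("request error:", err)
            return
        }
        defer resp.Body.Close()
        // A healthy release serves 200; this run saw "bad response code: 404".
        fmt.Println(url, "->", resp.Status)
    }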

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:163: expected the file for binary exist at "/Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl" but got error stat /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.28.3/kubectl (0.00s)
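Note: this subtest only asserts that the previous download left a kubectl binary on disk, so it fails as a direct consequence of the 404 above. Roughly the same existence check as a standalone sketch (the path is copied from the log; MINIKUBE_HOME differs on other hosts):

    // statcheck.go - hypothetical standalone version of the cache check.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Cache location reported by the failing assertion above.
        path := "/Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl"
        if _, err := os.Stat(path); err != nil {
            // Reproduces the "no such file or directory" error from the test.
            fmt.Println("missing cached kubectl:", err)
            os.Exit(1)
        }
        fmt.Println("cached kubectl present:", path)
    }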

TestDownloadOnlyKic (2.47s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-923000 --alsologtostderr --driver=docker 
aaa_download_only_test.go:225: (dbg) Non-zero exit: out/minikube-darwin-amd64 start --download-only -p download-docker-923000 --alsologtostderr --driver=docker : exit status 40 (1.360016195s)

-- stdout --
	* [download-docker-923000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node download-docker-923000 in cluster download-docker-923000
	* Pulling base image ...
	
	

-- /stdout --
** stderr ** 
	I1025 17:39:04.567568   65412 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:39:04.567855   65412 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:39:04.567860   65412 out.go:309] Setting ErrFile to fd 2...
	I1025 17:39:04.567864   65412 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:39:04.568045   65412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 17:39:04.569489   65412 out.go:303] Setting JSON to false
	I1025 17:39:04.592909   65412 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":31112,"bootTime":1698249632,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 17:39:04.593022   65412 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 17:39:04.614651   65412 out.go:177] * [download-docker-923000] minikube v1.31.2 on Darwin 14.0
	I1025 17:39:04.656857   65412 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 17:39:04.656970   65412 notify.go:220] Checking for updates...
	I1025 17:39:04.699526   65412 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:39:04.720452   65412 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 17:39:04.741590   65412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:39:04.762607   65412 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 17:39:04.784104   65412 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 17:39:04.845120   65412 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 17:39:04.845256   65412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:39:04.950197   65412 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2023-10-26 00:39:04.936663241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:39:04.994077   65412 out.go:177] * Using the docker driver based on user configuration
	I1025 17:39:05.016186   65412 start.go:298] selected driver: docker
	I1025 17:39:05.016211   65412 start.go:902] validating driver "docker" against <nil>
	I1025 17:39:05.016418   65412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:39:05.121506   65412 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2023-10-26 00:39:05.109596438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:39:05.121697   65412 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 17:39:05.124588   65412 start_flags.go:386] Using suggested 5891MB memory alloc based on sys=32768MB, container=5939MB
	I1025 17:39:05.124726   65412 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 17:39:05.147948   65412 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 17:39:05.168228   65412 cni.go:84] Creating CNI manager for ""
	I1025 17:39:05.168274   65412 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 17:39:05.168294   65412 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 17:39:05.168321   65412 start_flags.go:323] config:
	{Name:download-docker-923000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-docker-923000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:39:05.216028   65412 out.go:177] * Starting control plane node download-docker-923000 in cluster download-docker-923000
	I1025 17:39:05.237086   65412 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 17:39:05.279295   65412 out.go:177] * Pulling base image ...
	I1025 17:39:05.301162   65412 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:39:05.301215   65412 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 17:39:05.301250   65412 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 17:39:05.301271   65412 cache.go:56] Caching tarball of preloaded images
	I1025 17:39:05.301482   65412 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 17:39:05.301513   65412 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 17:39:05.303091   65412 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/download-docker-923000/config.json ...
	I1025 17:39:05.303190   65412 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/download-docker-923000/config.json: {Name:mk3769acee7b0741889201d3563d4c29fbd61b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:39:05.304060   65412 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:39:05.304396   65412 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl
	I1025 17:39:05.353638   65412 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 17:39:05.353754   65412 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 17:39:05.353772   65412 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1025 17:39:05.353777   65412 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1025 17:39:05.353786   65412 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1025 17:39:05.744937   65412 out.go:177] 
	W1025 17:39:05.765700   65412 out.go:239] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520] Decompressors:map[bz2:0xc000a0ce80 gz:0xc000a0ce88 tar:0xc000a0ce30 tar.bz2:0xc000a0ce40 tar.gz:0xc000a0ce50 tar.xz:0xc000a0ce60 tar.zst:0xc000a0ce70 tbz2:0xc000a0ce40 tgz:0xc000a0ce50 txz:0xc000a0ce60 tzst:0xc000a0ce70 xz:0xc000a0ce90 zip:0xc000a0cea0 zst:0xc000a0ce98] Getters:map[file:0xc002203ec0 http:0xc00063cdc0 https:0xc00063ce10] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: bad response code: 404
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520] Decompressors:map[bz2:0xc000a0ce80 gz:0xc000a0ce88 tar:0xc000a0ce30 tar.bz2:0xc000a0ce40 tar.gz:0xc000a0ce50 tar.xz:0xc000a0ce60 tar.zst:0xc000a0ce70 tbz2:0xc000a0ce40 tgz:0xc000a0ce50 txz:0xc000a0ce60 tzst:0xc000a0ce70 xz:0xc000a0ce90 zip:0xc000a0cea0 zst:0xc000a0ce98] Getters:map[file:0xc002203ec0 http:0xc00063cdc0 https:0xc00063ce10] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:
false Options:[]}: bad response code: 404
	W1025 17:39:05.765762   65412 out.go:239] * 
	* 
	W1025 17:39:05.766699   65412 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 17:39:05.829679   65412 out.go:177] 

** /stderr **
aaa_download_only_test.go:226: start with download only failed ["start" "--download-only" "-p" "download-docker-923000" "--alsologtostderr" "--driver=docker" ""] : exit status 40
helpers_test.go:175: Cleaning up "download-docker-923000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-923000
--- FAIL: TestDownloadOnlyKic (2.47s)

TestBinaryMirror (1.8s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:288: Failed to download binary: bad response code: 404
getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x52acd40 0x52acd40 0x52acd40 0x52acd40 0x52acd40 0x52acd40 0x52acd40] Decompressors:map[bz2:0xc00062dc30 gz:0xc00062dc38 tar:0xc00062dbe0 tar.bz2:0xc00062dbf0 tar.gz:0xc00062dc00 tar.xz:0xc00062dc10 tar.zst:0xc00062dc20 tbz2:0xc00062dbf0 tgz:0xc00062dc00 txz:0xc00062dc10 tzst:0xc00062dc20 xz:0xc00062dc40 zip:0xc00062dc50 zst:0xc00062dc48] Getters:map[file:0xc0006e0990 http:0xc000672780 https:0xc0006727d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}
k8s.io/minikube/pkg/minikube/download.download
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/minikube/download/download.go:109
k8s.io/minikube/pkg/minikube/download.Binary
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/minikube/download/binary.go:80
k8s.io/minikube/test/integration.TestBinaryMirror
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/aaa_download_only_test.go:286
testing.tRunner
	/usr/local/go/src/testing/testing.go:1595
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1650
download failed: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256
k8s.io/minikube/pkg/minikube/download.Binary
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/minikube/download/binary.go:81
k8s.io/minikube/test/integration.TestBinaryMirror
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/aaa_download_only_test.go:286
testing.tRunner
	/usr/local/go/src/testing/testing.go:1595
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1650
aaa_download_only_test.go:298: Failed to move binary file: rename  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestBinaryMirror3349905718/001/v1.28.3/bin/darwin/amd64/kubectl: no such file or directory
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-276000 --alsologtostderr --binary-mirror http://127.0.0.1:55590 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-276000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-276000
--- FAIL: TestBinaryMirror (1.80s)
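Note: the go-getter source string in this trace uses the "?checksum=file:<url>" form, meaning the kubectl download is verified against the published kubectl.sha256 file; here the binary request itself returns 404, so the run never reaches the checksum comparison. A hand-rolled sketch of that pattern using only net/http (assumptions: the .sha256 file contains just the hex digest; this is not minikube's or go-getter's actual implementation):

    // mirrorcheck.go - hypothetical illustration of the checksum-verified download.
    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            // Mirrors the "bad response code: 404" error in the trace above.
            return nil, fmt.Errorf("bad response code: %d", resp.StatusCode)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl"
        want, err := fetch(base + ".sha256")
        if err != nil {
            fmt.Println("checksum file:", err)
            return
        }
        bin, err := fetch(base)
        if err != nil {
            fmt.Println("binary:", err) // this is where the 404 surfaces
            return
        }
        got := fmt.Sprintf("%x", sha256.Sum256(bin))
        fmt.Println("checksum match:", got == strings.TrimSpace(string(want)))
    }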

TestFunctional/serial/MinikubeKubectlCmd (4.05s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 kubectl -- --context functional-188000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 kubectl -- --context functional-188000 get pods: exit status 1 (60.177063ms)

** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-amd64 -p functional-188000 kubectl -- --context functional-188000 get pods": exit status 1
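Note: "exec format error" means a kubectl file exists at that cache path but is not in a format macOS can execute; given the failed kubectl downloads earlier in this run, the cached file is suspect. A small hypothetical diagnostic, not part of the suite, that inspects the file's magic bytes:

    // magiccheck.go - hypothetical diagnostic for the exec format error.
    package main

    import (
        "encoding/binary"
        "fmt"
        "io"
        "os"
    )

    func main() {
        path := "/Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl"
        f, err := os.Open(path)
        if err != nil {
            fmt.Println("open:", err)
            return
        }
        defer f.Close()

        var magic [4]byte
        if _, err := io.ReadFull(f, magic[:]); err != nil {
            fmt.Println("read:", err)
            return
        }
        switch {
        case binary.LittleEndian.Uint32(magic[:]) == 0xfeedfacf:
            fmt.Println("64-bit Mach-O binary (expected for darwin/amd64 kubectl)")
        case magic[0] == 0x7f && magic[1] == 'E' && magic[2] == 'L' && magic[3] == 'F':
            fmt.Println("ELF binary (built for Linux, not macOS)")
        default:
            fmt.Printf("unrecognized header % x; not a native executable\n", magic[:])
        }
    }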
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-188000
helpers_test.go:235: (dbg) docker inspect functional-188000:

-- stdout --
	[
	    {
	        "Id": "a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb",
	        "Created": "2023-10-26T00:45:03.536217576Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T00:45:03.759078925Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb/hostname",
	        "HostsPath": "/var/lib/docker/containers/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb/hosts",
	        "LogPath": "/var/lib/docker/containers/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb-json.log",
	        "Name": "/functional-188000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-188000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-188000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c353ed8215fac2e882031b322e1aef62fbefadc60c1c795e5167fdca1713513b-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c353ed8215fac2e882031b322e1aef62fbefadc60c1c795e5167fdca1713513b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c353ed8215fac2e882031b322e1aef62fbefadc60c1c795e5167fdca1713513b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c353ed8215fac2e882031b322e1aef62fbefadc60c1c795e5167fdca1713513b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-188000",
	                "Source": "/var/lib/docker/volumes/functional-188000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-188000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-188000",
	                "name.minikube.sigs.k8s.io": "functional-188000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f41c0f00f47c85ccab259f3c9185c3fd8f888b614d21172aa6d7b42253a9d297",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56240"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56242"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56238"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56239"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f41c0f00f47c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-188000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a0c213dc3ac2",
	                        "functional-188000"
	                    ],
	                    "NetworkID": "9c6584acd3f5f010c10228aadf5881262279d8de66e3b7ef13f7639377f1b7ba",
	                    "EndpointID": "0df72d627619a30b77bdd3ae45493e740e172f3b4f0d497c5be486ae53327208",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-188000 -n functional-188000
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 logs -n 25: (2.963851535s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                              Args                              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | pause                                                          |                   |         |         |                     |                     |
	| unpause | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| stop    | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| delete  | -p nospam-797000                                               | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	| start   | -p functional-188000                                           | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:45 PDT |
	|         | --memory=4000                                                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                          |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                     |                   |         |         |                     |                     |
	| start   | -p functional-188000                                           | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:45 PDT | 25 Oct 23 17:46 PDT |
	|         | --alsologtostderr -v=8                                         |                   |         |         |                     |                     |
	| cache   | functional-188000 cache add                                    | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | functional-188000 cache add                                    | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | functional-188000 cache add                                    | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-188000 cache add                                    | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | minikube-local-cache-test:functional-188000                    |                   |         |         |                     |                     |
	| cache   | functional-188000 cache delete                                 | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | minikube-local-cache-test:functional-188000                    |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | list                                                           | minikube          | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	| ssh     | functional-188000 ssh sudo                                     | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | crictl images                                                  |                   |         |         |                     |                     |
	| ssh     | functional-188000                                              | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | ssh sudo docker rmi                                            |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| ssh     | functional-188000 ssh                                          | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-188000 cache reload                                 | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	| ssh     | functional-188000 ssh                                          | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| kubectl | functional-188000 kubectl --                                   | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT |                     |
	|         | --context functional-188000                                    |                   |         |         |                     |                     |
	|         | get pods                                                       |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 17:45:37
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 17:45:37.083097   66547 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:45:37.083397   66547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:45:37.083403   66547 out.go:309] Setting ErrFile to fd 2...
	I1025 17:45:37.083407   66547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:45:37.083612   66547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 17:45:37.085071   66547 out.go:303] Setting JSON to false
	I1025 17:45:37.106937   66547 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":31505,"bootTime":1698249632,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 17:45:37.107078   66547 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 17:45:37.128825   66547 out.go:177] * [functional-188000] minikube v1.31.2 on Darwin 14.0
	I1025 17:45:37.172354   66547 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 17:45:37.172489   66547 notify.go:220] Checking for updates...
	I1025 17:45:37.216499   66547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:45:37.238233   66547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 17:45:37.259405   66547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:45:37.280433   66547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 17:45:37.301239   66547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 17:45:37.323070   66547 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 17:45:37.323230   66547 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 17:45:37.380875   66547 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 17:45:37.381029   66547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:45:37.486222   66547 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2023-10-26 00:45:37.475652597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:45:37.528136   66547 out.go:177] * Using the docker driver based on existing profile
	I1025 17:45:37.549294   66547 start.go:298] selected driver: docker
	I1025 17:45:37.549311   66547 start.go:902] validating driver "docker" against &{Name:functional-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:45:37.549389   66547 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 17:45:37.549530   66547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:45:37.654871   66547 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2023-10-26 00:45:37.643015238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:45:37.658120   66547 cni.go:84] Creating CNI manager for ""
	I1025 17:45:37.658148   66547 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 17:45:37.658164   66547 start_flags.go:323] config:
	{Name:functional-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:45:37.701410   66547 out.go:177] * Starting control plane node functional-188000 in cluster functional-188000
	I1025 17:45:37.722572   66547 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 17:45:37.744136   66547 out.go:177] * Pulling base image ...
	I1025 17:45:37.786325   66547 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:45:37.786376   66547 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 17:45:37.786391   66547 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 17:45:37.786409   66547 cache.go:56] Caching tarball of preloaded images
	I1025 17:45:37.786592   66547 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 17:45:37.786614   66547 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 17:45:37.786761   66547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/config.json ...
	I1025 17:45:37.838933   66547 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 17:45:37.838964   66547 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 17:45:37.838985   66547 cache.go:194] Successfully downloaded all kic artifacts
	I1025 17:45:37.839031   66547 start.go:365] acquiring machines lock for functional-188000: {Name:mk049bc040d714cb261ebd3cb2ab3e83ad65175f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 17:45:37.839111   66547 start.go:369] acquired machines lock for "functional-188000" in 60.988µs
	I1025 17:45:37.839133   66547 start.go:96] Skipping create...Using existing machine configuration
	I1025 17:45:37.839143   66547 fix.go:54] fixHost starting: 
	I1025 17:45:37.839392   66547 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
	I1025 17:45:37.890210   66547 fix.go:102] recreateIfNeeded on functional-188000: state=Running err=<nil>
	W1025 17:45:37.890241   66547 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 17:45:37.933685   66547 out.go:177] * Updating the running docker "functional-188000" container ...
	I1025 17:45:37.954834   66547 machine.go:88] provisioning docker machine ...
	I1025 17:45:37.954890   66547 ubuntu.go:169] provisioning hostname "functional-188000"
	I1025 17:45:37.955095   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.007234   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:38.007576   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:38.007590   66547 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-188000 && echo "functional-188000" | sudo tee /etc/hostname
	I1025 17:45:38.139776   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-188000
	
	I1025 17:45:38.139871   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.191354   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:38.191648   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:38.191662   66547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-188000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-188000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-188000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 17:45:38.313799   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 17:45:38.313820   66547 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 17:45:38.313844   66547 ubuntu.go:177] setting up certificates
	I1025 17:45:38.313855   66547 provision.go:83] configureAuth start
	I1025 17:45:38.313936   66547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-188000
	I1025 17:45:38.364776   66547 provision.go:138] copyHostCerts
	I1025 17:45:38.364836   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 17:45:38.364892   66547 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 17:45:38.364902   66547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 17:45:38.365008   66547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 17:45:38.365211   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 17:45:38.365238   66547 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 17:45:38.365242   66547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 17:45:38.365307   66547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 17:45:38.365467   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 17:45:38.365509   66547 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 17:45:38.365513   66547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 17:45:38.365571   66547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 17:45:38.365709   66547 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.functional-188000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-188000]
	I1025 17:45:38.525621   66547 provision.go:172] copyRemoteCerts
	I1025 17:45:38.525682   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 17:45:38.525747   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.577340   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:38.665474   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 17:45:38.665544   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 17:45:38.687976   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 17:45:38.688036   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1025 17:45:38.710086   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 17:45:38.710170   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 17:45:38.733390   66547 provision.go:86] duration metric: configureAuth took 419.508117ms
	I1025 17:45:38.733404   66547 ubuntu.go:193] setting minikube options for container-runtime
	I1025 17:45:38.733544   66547 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 17:45:38.733620   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.785970   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:38.786249   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:38.786259   66547 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 17:45:38.909272   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 17:45:38.909285   66547 ubuntu.go:71] root file system type: overlay
	I1025 17:45:38.909388   66547 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 17:45:38.909477   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.960504   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:38.960822   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:38.960875   66547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 17:45:39.094927   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 17:45:39.095034   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.146810   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:39.147098   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:39.147114   66547 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 17:45:39.275394   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 17:45:39.275411   66547 machine.go:91] provisioned docker machine in 1.320517407s
	I1025 17:45:39.275417   66547 start.go:300] post-start starting for "functional-188000" (driver="docker")
	I1025 17:45:39.275429   66547 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 17:45:39.275513   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 17:45:39.275568   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.327545   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:39.418084   66547 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 17:45:39.422415   66547 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1025 17:45:39.422424   66547 command_runner.go:130] > NAME="Ubuntu"
	I1025 17:45:39.422428   66547 command_runner.go:130] > VERSION_ID="22.04"
	I1025 17:45:39.422437   66547 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1025 17:45:39.422443   66547 command_runner.go:130] > VERSION_CODENAME=jammy
	I1025 17:45:39.422446   66547 command_runner.go:130] > ID=ubuntu
	I1025 17:45:39.422450   66547 command_runner.go:130] > ID_LIKE=debian
	I1025 17:45:39.422454   66547 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 17:45:39.422459   66547 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 17:45:39.422468   66547 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 17:45:39.422475   66547 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 17:45:39.422479   66547 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1025 17:45:39.422525   66547 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 17:45:39.422543   66547 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 17:45:39.422550   66547 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 17:45:39.422563   66547 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 17:45:39.422572   66547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 17:45:39.422663   66547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 17:45:39.422807   66547 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 17:45:39.422815   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /etc/ssl/certs/652922.pem
	I1025 17:45:39.422963   66547 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/test/nested/copy/65292/hosts -> hosts in /etc/test/nested/copy/65292
	I1025 17:45:39.422969   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/test/nested/copy/65292/hosts -> /etc/test/nested/copy/65292/hosts
	I1025 17:45:39.423011   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/65292
	I1025 17:45:39.432101   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 17:45:39.454931   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/test/nested/copy/65292/hosts --> /etc/test/nested/copy/65292/hosts (40 bytes)
	I1025 17:45:39.478218   66547 start.go:303] post-start completed in 202.785073ms
	I1025 17:45:39.478295   66547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 17:45:39.478363   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.529399   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:39.616302   66547 command_runner.go:130] > 6%
	I1025 17:45:39.616374   66547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 17:45:39.621796   66547 command_runner.go:130] > 92G
	I1025 17:45:39.622081   66547 fix.go:56] fixHost completed within 1.782885312s
	I1025 17:45:39.622096   66547 start.go:83] releasing machines lock for "functional-188000", held for 1.782924172s
	I1025 17:45:39.622178   66547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-188000
	I1025 17:45:39.674301   66547 ssh_runner.go:195] Run: cat /version.json
	I1025 17:45:39.674307   66547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 17:45:39.674382   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.674382   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.731160   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:39.731332   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:39.923267   66547 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 17:45:39.925517   66547 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1698055645-17423", "minikube_version": "v1.31.2", "commit": "585245745aba695f9444ad633713942a6eacd882"}
	I1025 17:45:39.925667   66547 ssh_runner.go:195] Run: systemctl --version
	I1025 17:45:39.930728   66547 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1025 17:45:39.930757   66547 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1025 17:45:39.931003   66547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 17:45:39.937006   66547 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1025 17:45:39.937030   66547 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I1025 17:45:39.937042   66547 command_runner.go:130] > Device: a4h/164d	Inode: 1066465     Links: 1
	I1025 17:45:39.937053   66547 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 17:45:39.937062   66547 command_runner.go:130] > Access: 2023-10-26 00:45:07.468112584 +0000
	I1025 17:45:39.937067   66547 command_runner.go:130] > Modify: 2023-10-26 00:45:07.442112582 +0000
	I1025 17:45:39.937071   66547 command_runner.go:130] > Change: 2023-10-26 00:45:07.442112582 +0000
	I1025 17:45:39.937076   66547 command_runner.go:130] >  Birth: 2023-10-26 00:45:07.442112582 +0000
	I1025 17:45:39.937254   66547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 17:45:39.957372   66547 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 17:45:39.957452   66547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 17:45:39.966943   66547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
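
The two find commands above patch any loopback CNI config in place (adding a "name" field and pinning "cniVersion" to 1.0.0) and rename bridge/podman configs to *.mk_disabled so they cannot conflict with the CNI minikube chooses later; here nothing needed disabling. A quick way to inspect the result inside the node container is sketched below; it assumes the functional-188000 container from this run is still up, and the commands themselves are not part of the captured log.

	# inspect the patched loopback config and look for any disabled bridge/podman configs
	docker exec functional-188000 cat /etc/cni/net.d/200-loopback.conf
	docker exec functional-188000 find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled'
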
	I1025 17:45:39.966955   66547 start.go:472] detecting cgroup driver to use...
	I1025 17:45:39.966970   66547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 17:45:39.967079   66547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 17:45:39.983952   66547 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1025 17:45:39.984034   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 17:45:39.994911   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 17:45:40.005499   66547 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 17:45:40.005559   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 17:45:40.016325   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 17:45:40.026997   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 17:45:40.037495   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 17:45:40.048000   66547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 17:45:40.058293   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 17:45:40.069102   66547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 17:45:40.077883   66547 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1025 17:45:40.078751   66547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
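
Both kernel settings touched here, bridged traffic passing through iptables and IPv4 forwarding, are standard Kubernetes networking prerequisites. They can be verified in one command inside the node container (illustrative, assuming the container from this run is still available):

	# both values should report 1
	docker exec functional-188000 sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
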
	I1025 17:45:40.087876   66547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:45:40.167094   66547 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 17:45:50.369793   66547 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.202347018s)
	I1025 17:45:50.369810   66547 start.go:472] detecting cgroup driver to use...
	I1025 17:45:50.369822   66547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 17:45:50.369881   66547 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 17:45:50.390906   66547 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 17:45:50.391090   66547 command_runner.go:130] > [Unit]
	I1025 17:45:50.391103   66547 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 17:45:50.391109   66547 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 17:45:50.391113   66547 command_runner.go:130] > BindsTo=containerd.service
	I1025 17:45:50.391120   66547 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 17:45:50.391124   66547 command_runner.go:130] > Wants=network-online.target
	I1025 17:45:50.391134   66547 command_runner.go:130] > Requires=docker.socket
	I1025 17:45:50.391140   66547 command_runner.go:130] > StartLimitBurst=3
	I1025 17:45:50.391145   66547 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 17:45:50.391150   66547 command_runner.go:130] > [Service]
	I1025 17:45:50.391153   66547 command_runner.go:130] > Type=notify
	I1025 17:45:50.391157   66547 command_runner.go:130] > Restart=on-failure
	I1025 17:45:50.391163   66547 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 17:45:50.391188   66547 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 17:45:50.391200   66547 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 17:45:50.391215   66547 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 17:45:50.391229   66547 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 17:45:50.391242   66547 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 17:45:50.391248   66547 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 17:45:50.391259   66547 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 17:45:50.391266   66547 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 17:45:50.391269   66547 command_runner.go:130] > ExecStart=
	I1025 17:45:50.391280   66547 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 17:45:50.391286   66547 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 17:45:50.391292   66547 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 17:45:50.391299   66547 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 17:45:50.391307   66547 command_runner.go:130] > LimitNOFILE=infinity
	I1025 17:45:50.391310   66547 command_runner.go:130] > LimitNPROC=infinity
	I1025 17:45:50.391314   66547 command_runner.go:130] > LimitCORE=infinity
	I1025 17:45:50.391333   66547 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 17:45:50.391341   66547 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 17:45:50.391345   66547 command_runner.go:130] > TasksMax=infinity
	I1025 17:45:50.391348   66547 command_runner.go:130] > TimeoutStartSec=0
	I1025 17:45:50.391357   66547 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 17:45:50.391370   66547 command_runner.go:130] > Delegate=yes
	I1025 17:45:50.391385   66547 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 17:45:50.391399   66547 command_runner.go:130] > KillMode=process
	I1025 17:45:50.391416   66547 command_runner.go:130] > [Install]
	I1025 17:45:50.391423   66547 command_runner.go:130] > WantedBy=multi-user.target
	I1025 17:45:50.392381   66547 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 17:45:50.392455   66547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 17:45:50.405155   66547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 17:45:50.422541   66547 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 17:45:50.423432   66547 ssh_runner.go:195] Run: which cri-dockerd
	I1025 17:45:50.428150   66547 command_runner.go:130] > /usr/bin/cri-dockerd
	I1025 17:45:50.428262   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 17:45:50.438351   66547 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 17:45:50.459440   66547 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 17:45:50.562753   66547 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 17:45:50.662527   66547 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 17:45:50.662614   66547 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
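
The log records a 130-byte /etc/docker/daemon.json being written to force the "cgroupfs" cgroup driver, but not its contents. Docker's documented way to express that is the "exec-opts" key; the sketch below shows how to read the file that was actually written, with an assumed minimal equivalent in the comment (the real file may contain additional keys):

	docker exec functional-188000 cat /etc/docker/daemon.json
	# assumed minimal equivalent: {"exec-opts": ["native.cgroupdriver=cgroupfs"]}
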
	I1025 17:45:50.680668   66547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:45:50.770208   66547 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 17:45:51.066885   66547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 17:45:51.153020   66547 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 17:45:51.213077   66547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 17:45:51.277059   66547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:45:51.341504   66547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 17:45:51.374716   66547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:45:51.448574   66547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 17:45:51.549016   66547 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 17:45:51.549109   66547 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 17:45:51.554558   66547 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 17:45:51.554572   66547 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 17:45:51.554577   66547 command_runner.go:130] > Device: ach/172d	Inode: 667         Links: 1
	I1025 17:45:51.554583   66547 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 17:45:51.554593   66547 command_runner.go:130] > Access: 2023-10-26 00:45:51.462295391 +0000
	I1025 17:45:51.554598   66547 command_runner.go:130] > Modify: 2023-10-26 00:45:51.462295391 +0000
	I1025 17:45:51.554615   66547 command_runner.go:130] > Change: 2023-10-26 00:45:51.483295392 +0000
	I1025 17:45:51.554620   66547 command_runner.go:130] >  Birth: 2023-10-26 00:45:51.462295391 +0000
	I1025 17:45:51.554637   66547 start.go:540] Will wait 60s for crictl version
	I1025 17:45:51.554688   66547 ssh_runner.go:195] Run: which crictl
	I1025 17:45:51.559315   66547 command_runner.go:130] > /usr/bin/crictl
	I1025 17:45:51.559370   66547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 17:45:51.606043   66547 command_runner.go:130] > Version:  0.1.0
	I1025 17:45:51.606056   66547 command_runner.go:130] > RuntimeName:  docker
	I1025 17:45:51.606060   66547 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1025 17:45:51.606068   66547 command_runner.go:130] > RuntimeApiVersion:  v1
	I1025 17:45:51.608169   66547 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
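
With /etc/crictl.yaml now pointing at the cri-dockerd socket, the same CRI endpoint the kubelet will use can be queried directly with standard crictl subcommands (illustrative, not taken from the log):

	docker exec functional-188000 crictl info
	docker exec functional-188000 crictl ps -a
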
	I1025 17:45:51.608268   66547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 17:45:51.633657   66547 command_runner.go:130] > 24.0.6
	I1025 17:45:51.634797   66547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 17:45:51.658929   66547 command_runner.go:130] > 24.0.6
	I1025 17:45:51.684604   66547 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 17:45:51.684756   66547 cli_runner.go:164] Run: docker exec -t functional-188000 dig +short host.docker.internal
	I1025 17:45:51.824395   66547 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 17:45:51.824498   66547 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 17:45:51.829703   66547 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1025 17:45:51.829845   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:51.881103   66547 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:45:51.881175   66547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 17:45:51.900536   66547 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1025 17:45:51.900549   66547 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1025 17:45:51.900566   66547 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 17:45:51.900571   66547 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1025 17:45:51.900575   66547 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1025 17:45:51.900593   66547 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1025 17:45:51.900616   66547 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1025 17:45:51.900625   66547 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 17:45:51.901796   66547 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 17:45:51.901822   66547 docker.go:623] Images already preloaded, skipping extraction
	I1025 17:45:51.901910   66547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 17:45:51.922427   66547 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1025 17:45:51.922440   66547 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1025 17:45:51.922444   66547 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 17:45:51.922450   66547 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1025 17:45:51.922470   66547 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1025 17:45:51.922479   66547 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1025 17:45:51.922484   66547 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1025 17:45:51.922492   66547 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 17:45:51.923705   66547 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 17:45:51.923726   66547 cache_images.go:84] Images are preloaded, skipping loading
	I1025 17:45:51.923805   66547 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 17:45:51.976031   66547 command_runner.go:130] > cgroupfs
	I1025 17:45:51.977337   66547 cni.go:84] Creating CNI manager for ""
	I1025 17:45:51.977352   66547 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 17:45:51.977368   66547 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 17:45:51.977381   66547 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-188000 NodeName:functional-188000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 17:45:51.977519   66547 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-188000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 17:45:51.977587   66547 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-188000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1025 17:45:51.977652   66547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 17:45:51.986828   66547 command_runner.go:130] > kubeadm
	I1025 17:45:51.986840   66547 command_runner.go:130] > kubectl
	I1025 17:45:51.986844   66547 command_runner.go:130] > kubelet
	I1025 17:45:51.987608   66547 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 17:45:51.987670   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 17:45:51.996786   66547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 17:45:52.013733   66547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 17:45:52.031070   66547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
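
The kubelet systemd drop-in and the rendered kubeadm config shown above land at the paths in the scp lines; during a restart, minikube later diffs /var/tmp/minikube/kubeadm.yaml.new against the existing /var/tmp/minikube/kubeadm.yaml (visible further down in this log) to decide whether the cluster needs reconfiguring. To inspect both directly (illustrative commands, assuming the node container is still running):

	docker exec functional-188000 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	docker exec functional-188000 diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
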
	I1025 17:45:52.048179   66547 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 17:45:52.052588   66547 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1025 17:45:52.052630   66547 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000 for IP: 192.168.49.2
	I1025 17:45:52.052647   66547 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:45:52.052804   66547 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 17:45:52.052854   66547 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 17:45:52.052933   66547 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.key
	I1025 17:45:52.052995   66547 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.key.dd3b5fb2
	I1025 17:45:52.053041   66547 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.key
	I1025 17:45:52.053050   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 17:45:52.053069   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 17:45:52.053094   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 17:45:52.053111   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 17:45:52.053128   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 17:45:52.053143   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 17:45:52.053171   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 17:45:52.053200   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 17:45:52.053305   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 17:45:52.053339   66547 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 17:45:52.053350   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 17:45:52.053381   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 17:45:52.053412   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 17:45:52.053453   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 17:45:52.053521   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 17:45:52.053552   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem -> /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.053575   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.053592   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.054086   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 17:45:52.076922   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 17:45:52.099904   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 17:45:52.128904   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 17:45:52.152002   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 17:45:52.174881   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 17:45:52.197639   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 17:45:52.220319   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 17:45:52.243650   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 17:45:52.266877   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 17:45:52.289601   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 17:45:52.312621   66547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 17:45:52.329975   66547 ssh_runner.go:195] Run: openssl version
	I1025 17:45:52.335786   66547 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1025 17:45:52.336008   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 17:45:52.346331   66547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.351222   66547 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.351241   66547 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.351283   66547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.358052   66547 command_runner.go:130] > b5213941
	I1025 17:45:52.358206   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 17:45:52.367944   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 17:45:52.377852   66547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.382321   66547 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.382334   66547 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.382373   66547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.389461   66547 command_runner.go:130] > 51391683
	I1025 17:45:52.389678   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 17:45:52.399052   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 17:45:52.409097   66547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.413564   66547 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.413647   66547 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.413692   66547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.420528   66547 command_runner.go:130] > 3ec20f2e
	I1025 17:45:52.420684   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 17:45:52.430431   66547 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 17:45:52.435076   66547 command_runner.go:130] > ca.crt
	I1025 17:45:52.435086   66547 command_runner.go:130] > ca.key
	I1025 17:45:52.435090   66547 command_runner.go:130] > healthcheck-client.crt
	I1025 17:45:52.435094   66547 command_runner.go:130] > healthcheck-client.key
	I1025 17:45:52.435099   66547 command_runner.go:130] > peer.crt
	I1025 17:45:52.435103   66547 command_runner.go:130] > peer.key
	I1025 17:45:52.435106   66547 command_runner.go:130] > server.crt
	I1025 17:45:52.435109   66547 command_runner.go:130] > server.key
	I1025 17:45:52.435173   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 17:45:52.442174   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.442317   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 17:45:52.449012   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.449196   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 17:45:52.455985   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.456216   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 17:45:52.462741   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.462923   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 17:45:52.469578   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.469923   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 17:45:52.476788   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.476832   66547 kubeadm.go:404] StartCluster: {Name:functional-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:45:52.476938   66547 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 17:45:52.496829   66547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 17:45:52.506504   66547 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1025 17:45:52.506515   66547 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1025 17:45:52.506520   66547 command_runner.go:130] > /var/lib/minikube/etcd:
	I1025 17:45:52.506523   66547 command_runner.go:130] > member
	I1025 17:45:52.506533   66547 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 17:45:52.506546   66547 kubeadm.go:636] restartCluster start
	I1025 17:45:52.506595   66547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 17:45:52.515848   66547 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:52.515931   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:52.569559   66547 kubeconfig.go:92] found "functional-188000" server: "https://127.0.0.1:56239"
	I1025 17:45:52.569934   66547 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:45:52.570126   66547 kapi.go:59] client config for functional-188000: &rest.Config{Host:"https://127.0.0.1:56239", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 17:45:52.570622   66547 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 17:45:52.570796   66547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 17:45:52.580349   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:52.580404   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:52.591038   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:52.591057   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:52.591103   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:52.601570   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:53.101963   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:53.102218   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:53.115033   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:53.603778   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:53.604021   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:53.616635   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:54.102204   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:54.102301   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:54.114997   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:54.601714   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:54.601856   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:54.613179   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:55.103783   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:55.104030   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:55.116743   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:55.601962   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:55.602083   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:55.614008   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:56.103789   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:56.103986   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:56.117349   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:56.601950   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:56.602214   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:56.614821   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:57.101775   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:57.111517   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:57.143044   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:57.602179   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:57.602282   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:57.643548   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:58.102800   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:58.102905   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 17:45:58.146513   66547 command_runner.go:130] > 5688
	I1025 17:45:58.146615   66547 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5688/cgroup
	W1025 17:45:58.228791   66547 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5688/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:58.228907   66547 ssh_runner.go:195] Run: ls
	I1025 17:45:58.237805   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:00.358801   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 17:46:00.358845   66547 retry.go:31] will retry after 251.807756ms: https://127.0.0.1:56239/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 17:46:00.611423   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:00.629498   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:00.629530   66547 retry.go:31] will retry after 358.051127ms: https://127.0.0.1:56239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:00.989350   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:00.996835   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:00.996858   66547 retry.go:31] will retry after 308.790425ms: https://127.0.0.1:56239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:01.307739   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:01.314935   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:01.314957   66547 retry.go:31] will retry after 445.51233ms: https://127.0.0.1:56239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:01.761530   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:01.770260   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 200:
	ok
	I1025 17:46:01.770401   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:01.770408   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:01.770419   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:01.770427   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:01.829087   66547 round_trippers.go:574] Response Status: 200 OK in 58 milliseconds
	I1025 17:46:01.829154   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:01.829170   66547 round_trippers.go:580]     Audit-Id: b7d1cf3e-c721-48a5-bbe0-244b7bd61c9e
	I1025 17:46:01.829183   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:01.829192   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:01.829200   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:01.829206   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:01.829214   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:01 GMT
	I1025 17:46:01.829779   66547 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"399"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51058 chars]
	I1025 17:46:01.832402   66547 system_pods.go:86] 7 kube-system pods found
	I1025 17:46:01.832414   66547 system_pods.go:89] "coredns-5dd5756b68-ff5ll" [7022509e-429b-40a1-95e2-ac3b980b2b1e] Running
	I1025 17:46:01.832420   66547 system_pods.go:89] "etcd-functional-188000" [095a6b2c-e973-4dad-9409-01e79c7e3021] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 17:46:01.832426   66547 system_pods.go:89] "kube-apiserver-functional-188000" [6811c037-9ba7-49b2-9dc8-e7c835a205ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 17:46:01.832432   66547 system_pods.go:89] "kube-controller-manager-functional-188000" [000afba9-c176-4b7f-9674-24c20b7b1e92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 17:46:01.832441   66547 system_pods.go:89] "kube-proxy-bnvpn" [35c2ae14-426f-4a44-b88e-d3d88befe16f] Running
	I1025 17:46:01.832451   66547 system_pods.go:89] "kube-scheduler-functional-188000" [ac7541cf-a304-4933-acea-37c4f53f6710] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 17:46:01.832471   66547 system_pods.go:89] "storage-provisioner" [6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 17:46:01.832513   66547 round_trippers.go:463] GET https://127.0.0.1:56239/version
	I1025 17:46:01.832520   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:01.832528   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:01.832537   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:01.834045   66547 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 17:46:01.834055   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:01.834060   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:01 GMT
	I1025 17:46:01.834064   66547 round_trippers.go:580]     Audit-Id: a3d04dca-a785-46c9-93ef-676f69eaa058
	I1025 17:46:01.834069   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:01.834074   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:01.834082   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:01.834087   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:01.834092   66547 round_trippers.go:580]     Content-Length: 264
	I1025 17:46:01.834103   66547 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 17:46:01.834143   66547 api_server.go:141] control plane version: v1.28.3
	I1025 17:46:01.834151   66547 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I1025 17:46:01.834158   66547 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I1025 17:46:01.834167   66547 kubeadm.go:640] restartCluster took 9.32733389s
	I1025 17:46:01.834173   66547 kubeadm.go:406] StartCluster complete in 9.357063265s
	I1025 17:46:01.834183   66547 settings.go:142] acquiring lock: {Name:mkca0a8fe84aa865309571104a1d51551b90d38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:46:01.834265   66547 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:46:01.834706   66547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:46:01.834987   66547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 17:46:01.835003   66547 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 17:46:01.835042   66547 addons.go:69] Setting default-storageclass=true in profile "functional-188000"
	I1025 17:46:01.835058   66547 addons.go:69] Setting storage-provisioner=true in profile "functional-188000"
	I1025 17:46:01.835060   66547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-188000"
	I1025 17:46:01.835070   66547 addons.go:231] Setting addon storage-provisioner=true in "functional-188000"
	W1025 17:46:01.835074   66547 addons.go:240] addon storage-provisioner should already be in state true
	I1025 17:46:01.835111   66547 host.go:66] Checking if "functional-188000" exists ...
	I1025 17:46:01.835140   66547 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 17:46:01.835365   66547 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:46:01.835397   66547 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
	I1025 17:46:01.835424   66547 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
	I1025 17:46:01.836001   66547 kapi.go:59] client config for functional-188000: &rest.Config{Host:"https://127.0.0.1:56239", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 17:46:01.839059   66547 round_trippers.go:463] GET https://127.0.0.1:56239/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 17:46:01.839364   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:01.839372   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:01.839377   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:01.842538   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:01.842551   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:01.842556   66547 round_trippers.go:580]     Content-Length: 291
	I1025 17:46:01.842567   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:01 GMT
	I1025 17:46:01.842572   66547 round_trippers.go:580]     Audit-Id: 4f69ffef-e8dd-40ea-b79d-626c9b31a1c9
	I1025 17:46:01.842576   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:01.842580   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:01.842584   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:01.842588   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:01.842608   66547 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f89c5d55-82d9-44ba-90e8-9c480cde91ad","resourceVersion":"378","creationTimestamp":"2023-10-26T00:45:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1025 17:46:01.842732   66547 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-188000" context rescaled to 1 replicas
	I1025 17:46:01.842754   66547 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 17:46:01.865936   66547 out.go:177] * Verifying Kubernetes components...
	I1025 17:46:01.909141   66547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 17:46:01.915352   66547 command_runner.go:130] > apiVersion: v1
	I1025 17:46:01.915369   66547 command_runner.go:130] > data:
	I1025 17:46:01.915374   66547 command_runner.go:130] >   Corefile: |
	I1025 17:46:01.915381   66547 command_runner.go:130] >     .:53 {
	I1025 17:46:01.915387   66547 command_runner.go:130] >         log
	I1025 17:46:01.915396   66547 command_runner.go:130] >         errors
	I1025 17:46:01.915409   66547 command_runner.go:130] >         health {
	I1025 17:46:01.915422   66547 command_runner.go:130] >            lameduck 5s
	I1025 17:46:01.915429   66547 command_runner.go:130] >         }
	I1025 17:46:01.915437   66547 command_runner.go:130] >         ready
	I1025 17:46:01.915446   66547 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1025 17:46:01.915452   66547 command_runner.go:130] >            pods insecure
	I1025 17:46:01.915462   66547 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1025 17:46:01.915470   66547 command_runner.go:130] >            ttl 30
	I1025 17:46:01.915475   66547 command_runner.go:130] >         }
	I1025 17:46:01.915481   66547 command_runner.go:130] >         prometheus :9153
	I1025 17:46:01.915486   66547 command_runner.go:130] >         hosts {
	I1025 17:46:01.915493   66547 command_runner.go:130] >            192.168.65.254 host.minikube.internal
	I1025 17:46:01.915498   66547 command_runner.go:130] >            fallthrough
	I1025 17:46:01.915503   66547 command_runner.go:130] >         }
	I1025 17:46:01.915509   66547 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1025 17:46:01.915517   66547 command_runner.go:130] >            max_concurrent 1000
	I1025 17:46:01.915526   66547 command_runner.go:130] >         }
	I1025 17:46:01.915535   66547 command_runner.go:130] >         cache 30
	I1025 17:46:01.915549   66547 command_runner.go:130] >         loop
	I1025 17:46:01.915570   66547 command_runner.go:130] >         reload
	I1025 17:46:01.915579   66547 command_runner.go:130] >         loadbalance
	I1025 17:46:01.915583   66547 command_runner.go:130] >     }
	I1025 17:46:01.915587   66547 command_runner.go:130] > kind: ConfigMap
	I1025 17:46:01.915590   66547 command_runner.go:130] > metadata:
	I1025 17:46:01.915594   66547 command_runner.go:130] >   creationTimestamp: "2023-10-26T00:45:19Z"
	I1025 17:46:01.915599   66547 command_runner.go:130] >   name: coredns
	I1025 17:46:01.915603   66547 command_runner.go:130] >   namespace: kube-system
	I1025 17:46:01.915607   66547 command_runner.go:130] >   resourceVersion: "345"
	I1025 17:46:01.915611   66547 command_runner.go:130] >   uid: 48ba2b39-a203-4bbd-acc5-ba1dc75f42a6
	I1025 17:46:01.915686   66547 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1025 17:46:01.938009   66547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 17:46:01.917693   66547 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:46:01.923980   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-188000
	I1025 17:46:01.938222   66547 kapi.go:59] client config for functional-188000: &rest.Config{Host:"https://127.0.0.1:56239", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 17:46:01.959114   66547 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 17:46:01.959134   66547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 17:46:01.959234   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:46:01.960405   66547 addons.go:231] Setting addon default-storageclass=true in "functional-188000"
	W1025 17:46:01.960518   66547 addons.go:240] addon default-storageclass should already be in state true
	I1025 17:46:01.960578   66547 host.go:66] Checking if "functional-188000" exists ...
	I1025 17:46:01.963542   66547 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
	I1025 17:46:02.020144   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:46:02.020152   66547 node_ready.go:35] waiting up to 6m0s for node "functional-188000" to be "Ready" ...
	I1025 17:46:02.020260   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.020291   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.020298   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.020303   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.020606   66547 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 17:46:02.020617   66547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 17:46:02.020688   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:46:02.024270   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.024294   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.024300   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.024305   66547 round_trippers.go:580]     Audit-Id: d5cd2377-1809-4d0c-9a6e-e21462f60e30
	I1025 17:46:02.024310   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.024315   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.024321   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.024327   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.024415   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:02.024893   66547 node_ready.go:49] node "functional-188000" has status "Ready":"True"
	I1025 17:46:02.024907   66547 node_ready.go:38] duration metric: took 4.722781ms waiting for node "functional-188000" to be "Ready" ...
	I1025 17:46:02.024915   66547 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 17:46:02.024974   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:02.024979   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.024986   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.024991   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.028922   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.028944   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.028957   66547 round_trippers.go:580]     Audit-Id: 6870b210-07a0-4cdf-895d-dc2f65b016ec
	I1025 17:46:02.028999   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.029017   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.029036   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.029046   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.029057   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.029782   66547 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"399"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51058 chars]
	I1025 17:46:02.031320   66547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ff5ll" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:02.031393   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ff5ll
	I1025 17:46:02.031404   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.031414   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.031420   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.034955   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.034983   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.034995   66547 round_trippers.go:580]     Audit-Id: 8941c6e4-7e83-4117-92dc-ff11d13d7b99
	I1025 17:46:02.035007   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.035022   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.035034   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.035040   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.035045   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.035336   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6154 chars]
	I1025 17:46:02.035679   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.035687   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.035695   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.035700   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.038803   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.038816   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.038822   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.038827   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.038831   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.038836   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.038841   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.038847   66547 round_trippers.go:580]     Audit-Id: fa3499a0-907a-4f69-bf58-86d859239ead
	I1025 17:46:02.038908   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:02.039127   66547 pod_ready.go:92] pod "coredns-5dd5756b68-ff5ll" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:02.039136   66547 pod_ready.go:81] duration metric: took 7.80005ms waiting for pod "coredns-5dd5756b68-ff5ll" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:02.039144   66547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:02.039185   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:02.039190   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.039197   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.039202   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.042909   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.042923   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.042929   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.042941   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.042946   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.042952   66547 round_trippers.go:580]     Audit-Id: e612e10d-4453-4014-b3c7-1e0574e7662a
	I1025 17:46:02.042967   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.042973   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.043042   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:02.043364   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.043371   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.043378   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.043384   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.046303   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:02.046315   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.046321   66547 round_trippers.go:580]     Audit-Id: 3c8df985-1317-473c-97c0-a64ffede3a3f
	I1025 17:46:02.046328   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.046334   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.046339   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.046344   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.046349   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.046410   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:02.046661   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:02.046668   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.046675   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.046681   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.049321   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:02.049336   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.049341   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.049350   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.049355   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.049360   66547 round_trippers.go:580]     Audit-Id: f2334a50-e7dd-4955-96e5-7c1c426c0d9f
	I1025 17:46:02.049365   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.049370   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.049460   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:02.049785   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.049795   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.049802   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.049808   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.052679   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:02.052692   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.052701   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.052706   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.052712   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.052717   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.052722   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.052728   66547 round_trippers.go:580]     Audit-Id: d69bff72-c316-4f60-81c0-4d35e5ba8bbe
	I1025 17:46:02.052794   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:02.079377   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:46:02.122395   66547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 17:46:02.183318   66547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 17:46:02.553345   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:02.553364   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.553374   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.553385   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.558558   66547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 17:46:02.558580   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.558588   66547 round_trippers.go:580]     Audit-Id: 949d03a1-ebf5-4f0c-a246-3dd81e422c32
	I1025 17:46:02.558593   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.558598   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.558609   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.558617   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.558622   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.558718   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:02.559048   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.559060   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.559071   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.559086   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.562217   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.562240   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.562246   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.562251   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.562255   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.562260   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.562264   66547 round_trippers.go:580]     Audit-Id: 79bd8c4c-fd44-4540-9a4f-27fec52f6fdd
	I1025 17:46:02.562275   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.562364   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:03.039187   66547 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1025 17:46:03.041977   66547 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1025 17:46:03.045205   66547 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1025 17:46:03.048248   66547 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1025 17:46:03.050564   66547 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1025 17:46:03.053559   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:03.053568   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.053575   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.053581   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.056365   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.056390   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.056414   66547 round_trippers.go:580]     Audit-Id: f68121e6-0fd6-41c4-9cb9-e5b7c81b06d1
	I1025 17:46:03.056430   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.056436   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.056442   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.056448   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.056454   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.056534   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:03.056810   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:03.056817   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.056827   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.056839   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.057384   66547 command_runner.go:130] > pod/storage-provisioner configured
	I1025 17:46:03.059444   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.059454   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.059460   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.059464   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.059469   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.059476   66547 round_trippers.go:580]     Audit-Id: 313be3bd-5566-495d-b5fa-8963c57c9536
	I1025 17:46:03.059483   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.059491   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.059635   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:03.061389   66547 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1025 17:46:03.061456   66547 round_trippers.go:463] GET https://127.0.0.1:56239/apis/storage.k8s.io/v1/storageclasses
	I1025 17:46:03.061461   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.061467   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.061473   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.063978   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.063986   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.063992   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.063996   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.064001   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.064006   66547 round_trippers.go:580]     Content-Length: 1273
	I1025 17:46:03.064011   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.064016   66547 round_trippers.go:580]     Audit-Id: d00f9678-751d-484f-bad3-97f3f59a72d1
	I1025 17:46:03.064021   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.064040   66547 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"458"},"items":[{"metadata":{"name":"standard","uid":"6d60752b-781b-494a-b9bb-a1159bed062b","resourceVersion":"344","creationTimestamp":"2023-10-26T00:45:33Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T00:45:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1025 17:46:03.064343   66547 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6d60752b-781b-494a-b9bb-a1159bed062b","resourceVersion":"344","creationTimestamp":"2023-10-26T00:45:33Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T00:45:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 17:46:03.064369   66547 round_trippers.go:463] PUT https://127.0.0.1:56239/apis/storage.k8s.io/v1/storageclasses/standard
	I1025 17:46:03.064373   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.064380   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.064386   66547 round_trippers.go:473]     Content-Type: application/json
	I1025 17:46:03.064390   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.067390   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.067406   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.067412   66547 round_trippers.go:580]     Content-Length: 1220
	I1025 17:46:03.067418   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.067422   66547 round_trippers.go:580]     Audit-Id: 57315fd0-28a8-4710-8cf7-1645ee03e1a6
	I1025 17:46:03.067428   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.067433   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.067437   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.067442   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.067462   66547 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6d60752b-781b-494a-b9bb-a1159bed062b","resourceVersion":"344","creationTimestamp":"2023-10-26T00:45:33Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T00:45:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 17:46:03.113094   66547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 17:46:03.134830   66547 addons.go:502] enable addons completed in 1.299788878s: enabled=[storage-provisioner default-storageclass]
	I1025 17:46:03.553574   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:03.553590   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.553596   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.553601   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.556296   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.556307   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.556312   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.556317   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.556322   66547 round_trippers.go:580]     Audit-Id: 865eb1ed-d7d6-46f2-8407-56534b3b7398
	I1025 17:46:03.556326   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.556331   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.556336   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.556418   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:03.556664   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:03.556670   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.556675   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.556680   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.559012   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.559022   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.559027   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.559037   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.559043   66547 round_trippers.go:580]     Audit-Id: 49b5759c-6b3f-4239-b484-305075b508db
	I1025 17:46:03.559047   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.559052   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.559057   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.559107   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:04.055351   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:04.055372   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:04.055384   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:04.055393   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:04.059135   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:04.059146   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:04.059151   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:04.059156   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:04.059161   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:04.059166   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:04 GMT
	I1025 17:46:04.059170   66547 round_trippers.go:580]     Audit-Id: e6f7d332-9c72-4d6c-87ad-ec6327c1f9ff
	I1025 17:46:04.059175   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:04.059276   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:04.059539   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:04.059548   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:04.059554   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:04.059560   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:04.061899   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:04.061909   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:04.061914   66547 round_trippers.go:580]     Audit-Id: 970abfcd-d870-4d67-b2de-7f4be1b88964
	I1025 17:46:04.061925   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:04.061930   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:04.061935   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:04.061940   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:04.061944   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:04 GMT
	I1025 17:46:04.062121   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:04.062297   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:04.553311   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:04.553329   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:04.553338   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:04.553345   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:04.557062   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:04.557075   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:04.557081   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:04.557086   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:04.557115   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:04 GMT
	I1025 17:46:04.557126   66547 round_trippers.go:580]     Audit-Id: f4703e77-a3c3-4051-850e-1da515e3b30f
	I1025 17:46:04.557134   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:04.557140   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:04.557336   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:04.557644   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:04.557659   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:04.557670   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:04.557679   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:04.560596   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:04.560609   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:04.560615   66547 round_trippers.go:580]     Audit-Id: 10cf3519-abd2-4c64-a7e5-e86a4e1830aa
	I1025 17:46:04.560620   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:04.560625   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:04.560629   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:04.560634   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:04.560639   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:04 GMT
	I1025 17:46:04.560697   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:05.053293   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:05.053318   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:05.053340   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:05.053351   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:05.057813   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:05.057825   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:05.057831   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:05.057841   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:05.057846   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:05.057851   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:05.057856   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:05 GMT
	I1025 17:46:05.057864   66547 round_trippers.go:580]     Audit-Id: 6b7747e2-11d7-4e3d-a701-db49b78b9b6b
	I1025 17:46:05.057948   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:05.058212   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:05.058222   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:05.058228   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:05.058233   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:05.060850   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:05.060859   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:05.060871   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:05 GMT
	I1025 17:46:05.060877   66547 round_trippers.go:580]     Audit-Id: bc40d7ca-3032-4389-999b-37cbb81a09cc
	I1025 17:46:05.060881   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:05.060886   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:05.060891   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:05.060895   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:05.060946   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:05.553727   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:05.553747   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:05.553765   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:05.553775   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:05.557499   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:05.557517   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:05.557523   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:05.557528   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:05.557533   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:05.557537   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:05.557542   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:05 GMT
	I1025 17:46:05.557552   66547 round_trippers.go:580]     Audit-Id: 5708742a-e204-41ef-a503-668201cc4ef7
	I1025 17:46:05.557636   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:05.557889   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:05.557897   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:05.557903   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:05.557907   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:05.560643   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:05.560653   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:05.560659   66547 round_trippers.go:580]     Audit-Id: 174c83a1-fe18-466e-9ab9-e9693657189c
	I1025 17:46:05.560664   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:05.560668   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:05.560674   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:05.560680   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:05.560686   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:05 GMT
	I1025 17:46:05.560732   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:06.054772   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:06.054794   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:06.054806   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:06.054816   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:06.059228   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:06.059241   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:06.059246   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:06.059251   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:06 GMT
	I1025 17:46:06.059256   66547 round_trippers.go:580]     Audit-Id: e34061de-7f6b-4947-8973-e6ee6078f6aa
	I1025 17:46:06.059267   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:06.059273   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:06.059277   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:06.059347   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:06.059591   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:06.059599   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:06.059604   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:06.059609   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:06.061895   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:06.061906   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:06.061912   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:06 GMT
	I1025 17:46:06.061917   66547 round_trippers.go:580]     Audit-Id: 35218bf7-13b4-4920-9749-41fdcd46c00d
	I1025 17:46:06.061922   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:06.061926   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:06.061930   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:06.061938   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:06.061991   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:06.553589   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:06.553614   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:06.553626   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:06.553635   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:06.557639   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:06.557657   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:06.557669   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:06.557682   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:06 GMT
	I1025 17:46:06.557689   66547 round_trippers.go:580]     Audit-Id: ab3faf48-bfaf-4343-a8f5-23f9c1f06ea3
	I1025 17:46:06.557695   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:06.557700   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:06.557704   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:06.557784   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:06.558060   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:06.558070   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:06.558079   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:06.558104   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:06.561227   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:06.561239   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:06.561253   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:06.561262   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:06.561266   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:06.561272   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:06.561277   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:06 GMT
	I1025 17:46:06.561283   66547 round_trippers.go:580]     Audit-Id: fe3f9726-95d4-4d20-91c2-d022bdf6c86b
	I1025 17:46:06.561345   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:06.561547   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:07.053649   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:07.053671   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:07.053683   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:07.053693   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:07.058305   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:07.058317   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:07.058323   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:07.058329   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:07.058333   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:07.058339   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:07 GMT
	I1025 17:46:07.058349   66547 round_trippers.go:580]     Audit-Id: 00a571b5-5f08-42f3-9c43-8399a1c77c52
	I1025 17:46:07.058354   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:07.058443   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:07.058711   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:07.058717   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:07.058725   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:07.058731   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:07.061317   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:07.061327   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:07.061334   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:07.061342   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:07.061349   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:07.061354   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:07 GMT
	I1025 17:46:07.061358   66547 round_trippers.go:580]     Audit-Id: e79130af-3e90-410d-83fa-b541da60e340
	I1025 17:46:07.061363   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:07.061422   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:07.554161   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:07.554183   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:07.554194   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:07.554204   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:07.558461   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:07.558480   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:07.558488   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:07.558494   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:07 GMT
	I1025 17:46:07.558501   66547 round_trippers.go:580]     Audit-Id: d53e21b6-ed1d-43b0-81ca-9e3beed52379
	I1025 17:46:07.558507   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:07.558513   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:07.558519   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:07.558614   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:07.558958   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:07.558981   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:07.558987   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:07.558992   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:07.561252   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:07.561262   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:07.561267   66547 round_trippers.go:580]     Audit-Id: 1c958ae0-3e23-48ae-a573-40b9d5022235
	I1025 17:46:07.561271   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:07.561277   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:07.561281   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:07.561289   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:07.561293   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:07 GMT
	I1025 17:46:07.561344   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:08.054101   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:08.054124   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:08.054136   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:08.054146   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:08.058386   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:08.058398   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:08.058404   66547 round_trippers.go:580]     Audit-Id: 887ddb4c-6b63-4cf1-bbcc-ced84996b1f1
	I1025 17:46:08.058408   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:08.058413   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:08.058419   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:08.058423   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:08.058427   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:08 GMT
	I1025 17:46:08.058562   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:08.058831   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:08.058839   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:08.058853   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:08.058859   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:08.061229   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:08.061240   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:08.061246   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:08.061256   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:08.061262   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:08 GMT
	I1025 17:46:08.061267   66547 round_trippers.go:580]     Audit-Id: 8f1c8539-2e1b-4b8a-9422-80a903b3915b
	I1025 17:46:08.061272   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:08.061276   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:08.061332   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:08.553462   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:08.553484   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:08.553496   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:08.553506   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:08.557800   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:08.557812   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:08.557818   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:08.557822   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:08.557826   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:08 GMT
	I1025 17:46:08.557831   66547 round_trippers.go:580]     Audit-Id: 2ea19256-c604-487c-b853-9efdfeb7e08c
	I1025 17:46:08.557836   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:08.557841   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:08.557922   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:08.558180   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:08.558186   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:08.558192   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:08.558196   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:08.560778   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:08.560788   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:08.560793   66547 round_trippers.go:580]     Audit-Id: cb66681b-0282-480b-8414-d815083c64de
	I1025 17:46:08.560798   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:08.560805   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:08.560810   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:08.560815   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:08.560826   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:08 GMT
	I1025 17:46:08.560972   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:09.055448   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:09.055470   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:09.055482   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:09.055492   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:09.060067   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:09.060081   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:09.060089   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:09.060094   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:09.060107   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:09 GMT
	I1025 17:46:09.060113   66547 round_trippers.go:580]     Audit-Id: c3239eea-8614-4f5c-9fd9-aa6c2c1c4bf7
	I1025 17:46:09.060117   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:09.060122   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:09.060233   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:09.060488   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:09.060495   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:09.060502   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:09.060509   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:09.062675   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:09.062687   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:09.062695   66547 round_trippers.go:580]     Audit-Id: 38bbad82-d6d0-42fe-9dab-bdcbd7960a0c
	I1025 17:46:09.062702   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:09.062709   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:09.062714   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:09.062718   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:09.062723   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:09 GMT
	I1025 17:46:09.062839   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:09.063014   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:09.553472   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:09.553489   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:09.553497   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:09.553505   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:09.557011   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:09.557022   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:09.557028   66547 round_trippers.go:580]     Audit-Id: 659aab0a-50a9-49f9-8180-a6663a129ec7
	I1025 17:46:09.557036   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:09.557042   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:09.557047   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:09.557051   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:09.557056   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:09 GMT
	I1025 17:46:09.557142   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:09.557393   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:09.557400   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:09.557405   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:09.557411   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:09.559760   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:09.559770   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:09.559775   66547 round_trippers.go:580]     Audit-Id: 7b30c08e-98b0-4ee4-a631-e83b1e716d93
	I1025 17:46:09.559780   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:09.559785   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:09.559793   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:09.559799   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:09.559803   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:09 GMT
	I1025 17:46:09.559852   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:10.054170   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:10.054187   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:10.054196   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:10.054203   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:10.057437   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:10.057448   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:10.057454   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:10 GMT
	I1025 17:46:10.057459   66547 round_trippers.go:580]     Audit-Id: 280b56f7-61a4-4ad8-be90-a83e28ef83df
	I1025 17:46:10.057463   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:10.057469   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:10.057473   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:10.057478   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:10.057556   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:10.057814   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:10.057821   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:10.057828   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:10.057835   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:10.060112   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:10.060121   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:10.060126   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:10.060139   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:10.060145   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:10.060149   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:10 GMT
	I1025 17:46:10.060154   66547 round_trippers.go:580]     Audit-Id: a696ff4d-c2f9-4026-9955-092b57e65c55
	I1025 17:46:10.060159   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:10.060211   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:10.554186   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:10.554208   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:10.554222   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:10.554232   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:10.557308   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:10.557325   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:10.557331   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:10.557335   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:10.557340   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:10.557345   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:10 GMT
	I1025 17:46:10.557350   66547 round_trippers.go:580]     Audit-Id: 7f7d0ae3-d12a-4554-a1b4-1d882384e1b8
	I1025 17:46:10.557355   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:10.557442   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:10.557719   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:10.557725   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:10.557731   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:10.557736   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:10.560336   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:10.560347   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:10.560352   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:10.560357   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:10.560361   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:10.560366   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:10.560371   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:10 GMT
	I1025 17:46:10.560375   66547 round_trippers.go:580]     Audit-Id: 8094dea4-4cda-4630-94c7-50b17b72ab31
	I1025 17:46:10.560430   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:11.055174   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:11.055195   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:11.055207   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:11.055216   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:11.059613   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:11.059626   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:11.059631   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:11 GMT
	I1025 17:46:11.059635   66547 round_trippers.go:580]     Audit-Id: e5eef706-ceab-4449-88b2-5cebb50464e7
	I1025 17:46:11.059640   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:11.059645   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:11.059650   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:11.059654   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:11.059772   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:11.060020   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:11.060026   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:11.060032   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:11.060037   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:11.062410   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:11.062420   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:11.062426   66547 round_trippers.go:580]     Audit-Id: 2adee1dd-0f79-47c6-835f-d66df593e0d7
	I1025 17:46:11.062431   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:11.062436   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:11.062441   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:11.062446   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:11.062450   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:11 GMT
	I1025 17:46:11.062508   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:11.555434   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:11.555453   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:11.555465   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:11.555492   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:11.558771   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:11.558782   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:11.558788   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:11.558797   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:11.558802   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:11.558811   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:11 GMT
	I1025 17:46:11.558816   66547 round_trippers.go:580]     Audit-Id: e2b3f75d-6c3c-4acf-a15a-bf92f30adea2
	I1025 17:46:11.558820   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:11.558918   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:11.559179   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:11.559185   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:11.559191   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:11.559197   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:11.561672   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:11.561682   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:11.561687   66547 round_trippers.go:580]     Audit-Id: df3a6688-65b5-43f6-86e3-cd7c20e21f53
	I1025 17:46:11.561693   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:11.561698   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:11.561702   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:11.561707   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:11.561712   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:11 GMT
	I1025 17:46:11.561781   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:11.561975   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:12.055441   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:12.055461   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:12.055472   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:12.055481   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:12.059828   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:12.059843   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:12.059850   66547 round_trippers.go:580]     Audit-Id: e8a2e750-7848-43c8-b283-bc7345156427
	I1025 17:46:12.059857   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:12.059864   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:12.059871   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:12.059877   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:12.059884   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:12 GMT
	I1025 17:46:12.059982   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:12.060267   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:12.060274   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:12.060279   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:12.060284   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:12.062617   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:12.062627   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:12.062632   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:12.062637   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:12.062645   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:12.062653   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:12.062658   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:12 GMT
	I1025 17:46:12.062663   66547 round_trippers.go:580]     Audit-Id: 9f9018fb-f9da-4ffe-ae03-831547ab52c3
	I1025 17:46:12.062714   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:12.553490   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:12.553502   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:12.553509   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:12.553514   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:12.556087   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:12.556100   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:12.556106   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:12 GMT
	I1025 17:46:12.556116   66547 round_trippers.go:580]     Audit-Id: 73e43628-53b2-4953-a87b-bbe171b78108
	I1025 17:46:12.556121   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:12.556126   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:12.556131   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:12.556136   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:12.556224   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:12.556500   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:12.556513   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:12.556527   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:12.556537   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:12.559214   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:12.559224   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:12.559230   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:12 GMT
	I1025 17:46:12.559235   66547 round_trippers.go:580]     Audit-Id: f8768872-c339-45b8-837c-95ad5ee29477
	I1025 17:46:12.559239   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:12.559245   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:12.559249   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:12.559254   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:12.559316   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:13.054989   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:13.055006   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:13.055014   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:13.055021   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:13.058178   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:13.058189   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:13.058195   66547 round_trippers.go:580]     Audit-Id: 8b316948-7112-442c-a6ce-289a0ee21e6e
	I1025 17:46:13.058199   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:13.058203   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:13.058207   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:13.058212   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:13.058216   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:13 GMT
	I1025 17:46:13.058353   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:13.058613   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:13.058621   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:13.058627   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:13.058632   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:13.061234   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:13.061247   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:13.061254   66547 round_trippers.go:580]     Audit-Id: b970d2a4-573a-4ded-902d-1609df520a57
	I1025 17:46:13.061258   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:13.061263   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:13.061268   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:13.061273   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:13.061277   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:13 GMT
	I1025 17:46:13.061326   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:13.553861   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:13.553873   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:13.553880   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:13.553885   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:13.556689   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:13.556704   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:13.556710   66547 round_trippers.go:580]     Audit-Id: 7aeda20d-0237-4068-ba38-71d551d5a3be
	I1025 17:46:13.556716   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:13.556722   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:13.556726   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:13.556731   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:13.556736   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:13 GMT
	I1025 17:46:13.556823   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:13.557074   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:13.557081   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:13.557086   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:13.557091   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:13.559445   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:13.559454   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:13.559459   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:13.559463   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:13.559468   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:13.559476   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:13 GMT
	I1025 17:46:13.559482   66547 round_trippers.go:580]     Audit-Id: 843ca625-3c1e-40e6-871b-62fe676745bc
	I1025 17:46:13.559486   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:13.559536   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:14.053913   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:14.053936   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:14.053949   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:14.053959   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:14.058222   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:14.058236   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:14.058241   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:14.058246   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:14.058253   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:14.058260   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:14 GMT
	I1025 17:46:14.058265   66547 round_trippers.go:580]     Audit-Id: 865d1ff4-d333-4d45-87d2-ebc2dd54b3a0
	I1025 17:46:14.058275   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:14.058344   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:14.058585   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:14.058596   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:14.058602   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:14.058607   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:14.061173   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:14.061183   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:14.061190   66547 round_trippers.go:580]     Audit-Id: 9fd6e6d7-3d8a-4a15-be25-57b70fde0432
	I1025 17:46:14.061194   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:14.061199   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:14.061204   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:14.061215   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:14.061221   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:14 GMT
	I1025 17:46:14.061278   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:14.061470   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:14.554852   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:14.554872   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:14.554883   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:14.554892   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:14.559391   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:14.559401   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:14.559412   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:14.559418   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:14.559423   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:14.559428   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:14 GMT
	I1025 17:46:14.559439   66547 round_trippers.go:580]     Audit-Id: ea0b7054-23d0-4329-87ba-1943779cd292
	I1025 17:46:14.559444   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:14.559523   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:14.559773   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:14.559779   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:14.559785   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:14.559790   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:14.562293   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:14.562303   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:14.562308   66547 round_trippers.go:580]     Audit-Id: f78bdd81-304f-4dcd-a6d8-f34685ea99ee
	I1025 17:46:14.562313   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:14.562318   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:14.562323   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:14.562328   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:14.562334   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:14 GMT
	I1025 17:46:14.562383   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.053819   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:15.053839   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.053851   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.053861   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.058579   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:15.058589   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.058595   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.058599   66547 round_trippers.go:580]     Audit-Id: 74abae54-baab-406a-94ee-1aa8a7b116bc
	I1025 17:46:15.058604   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.058609   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.058613   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.058618   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.058688   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"469","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6066 chars]
	I1025 17:46:15.058951   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.058957   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.058963   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.058968   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.061366   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.061379   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.061388   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.061397   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.061403   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.061408   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.061412   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.061418   66547 round_trippers.go:580]     Audit-Id: bd320442-f000-417a-9183-1a9852c5d3d6
	I1025 17:46:15.061516   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.061707   66547 pod_ready.go:92] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:15.061714   66547 pod_ready.go:81] duration metric: took 13.022175482s waiting for pod "etcd-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.061724   66547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.061755   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-188000
	I1025 17:46:15.061760   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.061765   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.061771   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.063940   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.063949   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.063954   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.063959   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.063969   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.063974   66547 round_trippers.go:580]     Audit-Id: 850cde05-a797-40a3-80c3-1eae0db57c8d
	I1025 17:46:15.063979   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.063984   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.064056   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-188000","namespace":"kube-system","uid":"6811c037-9ba7-49b2-9dc8-e7c835a205ee","resourceVersion":"460","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"0f3f9f77e1fc8a12cf1621823498272c","kubernetes.io/config.mirror":"0f3f9f77e1fc8a12cf1621823498272c","kubernetes.io/config.seen":"2023-10-26T00:45:19.375266605Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8448 chars]
	I1025 17:46:15.064322   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.064329   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.064334   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.064340   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.066647   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.066656   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.066686   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.066692   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.066697   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.066701   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.066706   66547 round_trippers.go:580]     Audit-Id: 9ef039f5-0771-4f52-8e52-5a9f73ca43ba
	I1025 17:46:15.066711   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.066774   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.066939   66547 pod_ready.go:92] pod "kube-apiserver-functional-188000" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:15.066945   66547 pod_ready.go:81] duration metric: took 5.216443ms waiting for pod "kube-apiserver-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.066951   66547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.066983   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-188000
	I1025 17:46:15.066988   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.066993   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.066998   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.069468   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.069477   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.069482   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.069487   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.069492   66547 round_trippers.go:580]     Audit-Id: cb573095-3c2f-44e0-bf68-53836dcd873d
	I1025 17:46:15.069496   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.069501   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.069506   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.069579   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-188000","namespace":"kube-system","uid":"000afba9-c176-4b7f-9674-24c20b7b1e92","resourceVersion":"465","creationTimestamp":"2023-10-26T00:45:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a5cba45956bd26c7fcaab9a2058286e","kubernetes.io/config.mirror":"1a5cba45956bd26c7fcaab9a2058286e","kubernetes.io/config.seen":"2023-10-26T00:45:13.501886918Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8021 chars]
	I1025 17:46:15.069836   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.069843   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.069852   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.069858   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.072107   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.072117   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.072123   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.072128   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.072133   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.072138   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.072142   66547 round_trippers.go:580]     Audit-Id: 119d9906-4821-48a4-ae88-535d42729f96
	I1025 17:46:15.072147   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.072212   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.072414   66547 pod_ready.go:92] pod "kube-controller-manager-functional-188000" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:15.072428   66547 pod_ready.go:81] duration metric: took 5.469911ms waiting for pod "kube-controller-manager-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.072443   66547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bnvpn" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.072484   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-proxy-bnvpn
	I1025 17:46:15.072490   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.072496   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.072500   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.074917   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.074926   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.074932   66547 round_trippers.go:580]     Audit-Id: decd6794-8dbf-4ba6-9cc7-185c7f37c6e6
	I1025 17:46:15.074937   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.074943   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.074948   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.074953   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.074957   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.075015   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bnvpn","generateName":"kube-proxy-","namespace":"kube-system","uid":"35c2ae14-426f-4a44-b88e-d3d88befe16f","resourceVersion":"389","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"85b94970-c74c-4b8b-b6dd-957621f9c519","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85b94970-c74c-4b8b-b6dd-957621f9c519\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I1025 17:46:15.075250   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.075257   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.075262   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.075270   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.077786   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.077795   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.077801   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.077805   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.077811   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.077816   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.077821   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.077826   66547 round_trippers.go:580]     Audit-Id: db80ddc9-6928-4174-9761-32404555f696
	I1025 17:46:15.077887   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.078057   66547 pod_ready.go:92] pod "kube-proxy-bnvpn" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:15.078063   66547 pod_ready.go:81] duration metric: took 5.613315ms waiting for pod "kube-proxy-bnvpn" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.078069   66547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.078101   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:15.078105   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.078111   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.078116   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.080390   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.080399   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.080408   66547 round_trippers.go:580]     Audit-Id: 2b4ac5f4-3133-4ae4-b836-0ce5fdc80192
	I1025 17:46:15.080414   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.080431   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.080445   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.080450   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.080455   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.080507   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-188000","namespace":"kube-system","uid":"ac7541cf-a304-4933-acea-37c4f53f6710","resourceVersion":"398","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.mirror":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5147 chars]
	I1025 17:46:15.080758   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.080765   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.080773   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.080779   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.083185   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.083194   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.083199   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.083204   66547 round_trippers.go:580]     Audit-Id: 152f905b-e7d4-409d-beab-daf3a108a2b2
	I1025 17:46:15.083209   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.083218   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.083223   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.083228   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.083284   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.254423   66547 request.go:629] Waited for 170.893265ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:15.254487   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:15.254497   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.254508   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.254519   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.259058   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:15.259074   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.259080   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.259084   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.259089   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.259093   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.259098   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.259103   66547 round_trippers.go:580]     Audit-Id: 4da5ad3d-18de-4358-a529-387ddf94b3f3
	I1025 17:46:15.259178   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-188000","namespace":"kube-system","uid":"ac7541cf-a304-4933-acea-37c4f53f6710","resourceVersion":"398","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.mirror":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5147 chars]
	I1025 17:46:15.453865   66547 request.go:629] Waited for 194.435263ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.453956   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.453969   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.453979   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.453987   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.457506   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:15.457520   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.457532   66547 round_trippers.go:580]     Audit-Id: 2a313e14-6655-461e-b539-60e84dd16088
	I1025 17:46:15.457537   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.457542   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.457546   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.457551   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.457556   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.457615   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.960076   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:15.960098   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.960109   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.960119   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.964346   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:15.964355   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.964361   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.964365   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.964370   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.964375   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.964379   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.964388   66547 round_trippers.go:580]     Audit-Id: 02705c23-9c69-4515-b19d-610797ec5736
	I1025 17:46:15.964467   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-188000","namespace":"kube-system","uid":"ac7541cf-a304-4933-acea-37c4f53f6710","resourceVersion":"398","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.mirror":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5147 chars]
	I1025 17:46:15.964698   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.964705   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.964710   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.964717   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.967202   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.967212   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.967217   66547 round_trippers.go:580]     Audit-Id: 511daa23-5a0a-4beb-9b76-08ade497efb7
	I1025 17:46:15.967223   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.967228   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.967232   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.967238   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.967243   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.967452   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:16.458028   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:16.458040   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.458047   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.458054   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.460857   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:16.460869   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.460875   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.460880   66547 round_trippers.go:580]     Audit-Id: 7f3abe07-377a-4f8a-9316-2ef068c87158
	I1025 17:46:16.460884   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.460889   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.460894   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.460898   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.460954   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-188000","namespace":"kube-system","uid":"ac7541cf-a304-4933-acea-37c4f53f6710","resourceVersion":"474","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.mirror":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4903 chars]
	I1025 17:46:16.461192   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:16.461200   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.461208   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.461214   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.463625   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:16.463635   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.463640   66547 round_trippers.go:580]     Audit-Id: 2617ab88-28c3-4631-a4b7-6e8f25540de6
	I1025 17:46:16.463645   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.463651   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.463655   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.463664   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.463674   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.463731   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:16.463906   66547 pod_ready.go:92] pod "kube-scheduler-functional-188000" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:16.463915   66547 pod_ready.go:81] duration metric: took 1.385799554s waiting for pod "kube-scheduler-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:16.463926   66547 pod_ready.go:38] duration metric: took 14.438569915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
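
The pod_ready waits above repeat one pattern for each control-plane pod: GET the pod, GET its node, and pass once the pod reports a Ready condition of "True". A minimal client-go sketch of that check, for orientation only (this is not minikube's pod_ready.go, and it assumes the default kubeconfig location rather than the test run's KUBECONFIG):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is "True" -- the same
// assertion the pod_ready.go:92 lines above are logging.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes ~/.kube/config; the test above points KUBECONFIG elsewhere.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-functional-188000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}
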
	I1025 17:46:16.463940   66547 api_server.go:52] waiting for apiserver process to appear ...
	I1025 17:46:16.463993   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 17:46:16.475108   66547 command_runner.go:130] > 5688
	I1025 17:46:16.475797   66547 api_server.go:72] duration metric: took 14.632584237s to wait for apiserver process to appear ...
	I1025 17:46:16.475805   66547 api_server.go:88] waiting for apiserver healthz status ...
	I1025 17:46:16.475815   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:16.481676   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 200:
	ok
	I1025 17:46:16.481717   66547 round_trippers.go:463] GET https://127.0.0.1:56239/version
	I1025 17:46:16.481723   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.481729   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.481736   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.483200   66547 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 17:46:16.483209   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.483215   66547 round_trippers.go:580]     Content-Length: 264
	I1025 17:46:16.483248   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.483254   66547 round_trippers.go:580]     Audit-Id: 851d77ee-df52-4077-8ba6-cfdbb13cf5d2
	I1025 17:46:16.483265   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.483270   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.483274   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.483280   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.483296   66547 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 17:46:16.483327   66547 api_server.go:141] control plane version: v1.28.3
	I1025 17:46:16.483334   66547 api_server.go:131] duration metric: took 7.524769ms to wait for apiserver health ...
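
Once the apiserver process is found, minikube probes /healthz and then /version over the forwarded port (127.0.0.1:56239 here). A rough standalone equivalent of the healthz probe, assuming the same local endpoint and skipping TLS verification only because the cluster CA is not in the host trust store (never do this against a production endpoint):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The apiserver presents a cluster-CA-signed certificate, so a quick local
	// probe either trusts that CA or skips verification.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://127.0.0.1:56239/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver answers: 200 ok
}
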
	I1025 17:46:16.483339   66547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 17:46:16.483372   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:16.483376   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.483381   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.483389   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.486264   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:16.486275   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.486281   66547 round_trippers.go:580]     Audit-Id: cb0a122d-d77e-4443-a0f7-e7365750745f
	I1025 17:46:16.486285   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.486289   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.486295   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.486302   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.486310   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.487576   66547 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49690 chars]
	I1025 17:46:16.488734   66547 system_pods.go:59] 7 kube-system pods found
	I1025 17:46:16.488744   66547 system_pods.go:61] "coredns-5dd5756b68-ff5ll" [7022509e-429b-40a1-95e2-ac3b980b2b1e] Running
	I1025 17:46:16.488748   66547 system_pods.go:61] "etcd-functional-188000" [095a6b2c-e973-4dad-9409-01e79c7e3021] Running
	I1025 17:46:16.488752   66547 system_pods.go:61] "kube-apiserver-functional-188000" [6811c037-9ba7-49b2-9dc8-e7c835a205ee] Running
	I1025 17:46:16.488756   66547 system_pods.go:61] "kube-controller-manager-functional-188000" [000afba9-c176-4b7f-9674-24c20b7b1e92] Running
	I1025 17:46:16.488764   66547 system_pods.go:61] "kube-proxy-bnvpn" [35c2ae14-426f-4a44-b88e-d3d88befe16f] Running
	I1025 17:46:16.488769   66547 system_pods.go:61] "kube-scheduler-functional-188000" [ac7541cf-a304-4933-acea-37c4f53f6710] Running
	I1025 17:46:16.488773   66547 system_pods.go:61] "storage-provisioner" [6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f] Running
	I1025 17:46:16.488776   66547 system_pods.go:74] duration metric: took 5.432289ms to wait for pod list to return data ...
	I1025 17:46:16.488782   66547 default_sa.go:34] waiting for default service account to be created ...
	I1025 17:46:16.655930   66547 request.go:629] Waited for 167.068857ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/namespaces/default/serviceaccounts
	I1025 17:46:16.656009   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/default/serviceaccounts
	I1025 17:46:16.656016   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.656024   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.656032   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.659126   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:16.659137   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.659143   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.659148   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.659154   66547 round_trippers.go:580]     Content-Length: 261
	I1025 17:46:16.659158   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.659162   66547 round_trippers.go:580]     Audit-Id: c0126d31-0283-48fd-981d-f5b6d435fd2a
	I1025 17:46:16.659167   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.659174   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.659186   66547 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d159d953-5483-49f6-8b51-7f76441cc765","resourceVersion":"289","creationTimestamp":"2023-10-26T00:45:31Z"}}]}
	I1025 17:46:16.659314   66547 default_sa.go:45] found service account: "default"
	I1025 17:46:16.659322   66547 default_sa.go:55] duration metric: took 170.531208ms for default service account to be created ...
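
The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, which falls back to QPS 5 with a burst of 10 when the config leaves those fields unset; they are not server-side API Priority and Fairness rejections. A small sketch of raising those limits on a rest.Config (illustrative values, and minikube itself may tune this differently):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Zero values fall back to client-go's defaults (QPS 5, Burst 10); raising them
	// shortens the "Waited ... due to client-side throttling" pauses seen above.
	cfg.QPS = 50
	cfg.Burst = 100
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println("client ready:", client != nil)
}
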
	I1025 17:46:16.659329   66547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 17:46:16.854498   66547 request.go:629] Waited for 195.005453ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:16.854571   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:16.854582   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.854593   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.854605   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.859796   66547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 17:46:16.859809   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.859815   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.859819   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.859824   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.859828   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.859833   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.859842   66547 round_trippers.go:580]     Audit-Id: 5d5f778f-c41f-4ae0-a9c4-f39a601188a6
	I1025 17:46:16.860185   66547 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49690 chars]
	I1025 17:46:16.861321   66547 system_pods.go:86] 7 kube-system pods found
	I1025 17:46:16.861330   66547 system_pods.go:89] "coredns-5dd5756b68-ff5ll" [7022509e-429b-40a1-95e2-ac3b980b2b1e] Running
	I1025 17:46:16.861334   66547 system_pods.go:89] "etcd-functional-188000" [095a6b2c-e973-4dad-9409-01e79c7e3021] Running
	I1025 17:46:16.861338   66547 system_pods.go:89] "kube-apiserver-functional-188000" [6811c037-9ba7-49b2-9dc8-e7c835a205ee] Running
	I1025 17:46:16.861342   66547 system_pods.go:89] "kube-controller-manager-functional-188000" [000afba9-c176-4b7f-9674-24c20b7b1e92] Running
	I1025 17:46:16.861345   66547 system_pods.go:89] "kube-proxy-bnvpn" [35c2ae14-426f-4a44-b88e-d3d88befe16f] Running
	I1025 17:46:16.861350   66547 system_pods.go:89] "kube-scheduler-functional-188000" [ac7541cf-a304-4933-acea-37c4f53f6710] Running
	I1025 17:46:16.861353   66547 system_pods.go:89] "storage-provisioner" [6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f] Running
	I1025 17:46:16.861358   66547 system_pods.go:126] duration metric: took 202.018505ms to wait for k8s-apps to be running ...
	I1025 17:46:16.861363   66547 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 17:46:16.861414   66547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 17:46:16.873132   66547 system_svc.go:56] duration metric: took 11.763743ms WaitForService to wait for kubelet.
	I1025 17:46:16.873146   66547 kubeadm.go:581] duration metric: took 15.02992274s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
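
system_svc.go confirms the kubelet unit is running by executing `sudo systemctl is-active --quiet service kubelet` inside the node container; `is-active --quiet` prints nothing and signals the state purely through its exit code. A simplified local equivalent, assuming a systemd host (on this macOS runner the unit only exists inside the minikube container):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 only when the unit is active.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
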
	I1025 17:46:16.873158   66547 node_conditions.go:102] verifying NodePressure condition ...
	I1025 17:46:17.054055   66547 request.go:629] Waited for 180.8478ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/nodes
	I1025 17:46:17.054105   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes
	I1025 17:46:17.054139   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:17.054273   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:17.054292   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:17.058336   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:17.058347   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:17.058353   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:17.058357   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:17 GMT
	I1025 17:46:17.058362   66547 round_trippers.go:580]     Audit-Id: a5713e87-40d7-43b6-9c61-1bb63a4a2784
	I1025 17:46:17.058374   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:17.058379   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:17.058384   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:17.058441   66547 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4844 chars]
	I1025 17:46:17.058663   66547 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 17:46:17.058675   66547 node_conditions.go:123] node cpu capacity is 12
	I1025 17:46:17.058685   66547 node_conditions.go:105] duration metric: took 185.517472ms to run NodePressure ...
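
The NodePressure step reads those capacity figures (107016164Ki of ephemeral storage, 12 CPUs) straight from the Node object's status. A compact client-go sketch that lists nodes and prints the same two capacity fields, again assuming the default kubeconfig:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
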
	I1025 17:46:17.058692   66547 start.go:228] waiting for startup goroutines ...
	I1025 17:46:17.058697   66547 start.go:233] waiting for cluster config update ...
	I1025 17:46:17.058708   66547 start.go:242] writing updated cluster config ...
	I1025 17:46:17.058999   66547 ssh_runner.go:195] Run: rm -f paused
	I1025 17:46:17.098129   66547 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 17:46:17.131085   66547 out.go:177] * Done! kubectl is now configured to use "functional-188000" cluster and "default" namespace by default
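
The closing lines flag a kubectl/cluster minor-version skew of 1 (kubectl 1.27.2 against a v1.28.3 control plane), which is inside kubectl's supported +/-1 window, so only an informational note is printed. One way to reproduce the comparison by shelling out to kubectl (a hypothetical helper, not part of the test suite):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// versionInfo mirrors the fields kubectl prints with --output=json.
type versionInfo struct {
	ClientVersion struct{ GitVersion string } `json:"clientVersion"`
	ServerVersion struct{ GitVersion string } `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "version", "--output=json").Output()
	if err != nil {
		panic(err)
	}
	var v versionInfo
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Printf("client=%s server=%s\n", v.ClientVersion.GitVersion, v.ServerVersion.GitVersion)
}
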
	
	* 
	* ==> Docker <==
	* Oct 26 00:45:51 functional-188000 cri-dockerd[4689]: time="2023-10-26T00:45:51Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 26 00:45:51 functional-188000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Oct 26 00:45:51 functional-188000 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Oct 26 00:45:51 functional-188000 systemd[1]: cri-docker.service: Deactivated successfully.
	Oct 26 00:45:51 functional-188000 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Oct 26 00:45:51 functional-188000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Start docker client with request timeout 0s"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Loaded network plugin cni"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Docker Info: &{ID:68cb55e9-3c82-4216-a56e-ae91fcc0c943 Containers:14 ContainersRunning:0 ContainersPaused:0 ContainersStopped:14 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:35 SystemTime:2023-10-26T00:45:51.538922738Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:6.4.16-linuxkit OperatingSystem:
Ubuntu 22.04.3 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0000ce230 NCPU:12 MemTotal:6227828736 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:functional-188000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense
: DefaultAddressPools:[] Warnings:[]}"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 26 00:45:51 functional-188000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f18f6726713c225b033534e3b5f28ba842f91579806ad1d533a77c48a35cc20/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/73f778a7c4041980a802143f23147c72daead76bace354ee12338d2664f533ad/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4a4bc70f7327ec61234ddaf949266c43749e9aa7244880110cbb75b815a88b9f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/52d6a2732be9d158703e4d9b2adc05c58188b6e0fe375bc1332711e2a6aa9ba5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dfe51b5073325f5ba2cc1b45fd812a87d8fba60716c34dee564ee01c3d53a02/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/559b9a278dba392d83f546119ba1fbdb9d79aa4041d57c4d2c3a5243195064d8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c29298a9a01a00b04a2372723fc93bcf9a28f2909c24e0e2f2a8fdbbd36d2c8d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 dockerd[4476]: time="2023-10-26T00:45:57.845413519Z" level=info msg="ignoring event" container=556e0913a4194f07cf0a6c6a9b4ddec0df633530cf7e3a9870b1a67c0a39079f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c500b713ece17       6e38f40d628db       13 seconds ago       Running             storage-provisioner       2                   73f778a7c4041       storage-provisioner
	c9aa983994347       ead0a4a53df89       31 seconds ago       Running             coredns                   1                   c29298a9a01a0       coredns-5dd5756b68-ff5ll
	97bbb1430ec1f       bfc896cf80fba       31 seconds ago       Running             kube-proxy                1                   559b9a278dba3       kube-proxy-bnvpn
	c51c8d65b5703       10baa1ca17068       31 seconds ago       Running             kube-controller-manager   1                   9dfe51b507332       kube-controller-manager-functional-188000
	de0914a73beb4       6d1b4fd1b182d       31 seconds ago       Running             kube-scheduler            1                   52d6a2732be9d       kube-scheduler-functional-188000
	0b21c9816561f       5374347291230       31 seconds ago       Running             kube-apiserver            1                   4a4bc70f7327e       kube-apiserver-functional-188000
	556e0913a4194       6e38f40d628db       31 seconds ago       Exited              storage-provisioner       1                   73f778a7c4041       storage-provisioner
	3c8adea9036e4       73deb9a3f7025       31 seconds ago       Running             etcd                      1                   5f18f6726713c       etcd-functional-188000
	af12fd91d5bf2       ead0a4a53df89       54 seconds ago       Exited              coredns                   0                   8422dfd027437       coredns-5dd5756b68-ff5ll
	a43456fe6b21c       bfc896cf80fba       55 seconds ago       Exited              kube-proxy                0                   5e341bbd6ea5e       kube-proxy-bnvpn
	274ded1e50f28       6d1b4fd1b182d       About a minute ago   Exited              kube-scheduler            0                   43612e5ea242d       kube-scheduler-functional-188000
	3e2bf17527f5a       10baa1ca17068       About a minute ago   Exited              kube-controller-manager   0                   118d5ec425047       kube-controller-manager-functional-188000
	f6135c3690fc6       5374347291230       About a minute ago   Exited              kube-apiserver            0                   f4161cfa72653       kube-apiserver-functional-188000
	0f623e5f8d417       73deb9a3f7025       About a minute ago   Exited              etcd                      0                   11c2daa128776       etcd-functional-188000
	
	* 
	* ==> coredns [af12fd91d5bf] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [c9aa98399434] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56233 - 60901 "HINFO IN 9099018579167008431.5841343290163082259. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.066366848s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-188000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-188000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=functional-188000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T17_45_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 00:45:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-188000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 00:46:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 00:46:20 +0000   Thu, 26 Oct 2023 00:45:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 00:46:20 +0000   Thu, 26 Oct 2023 00:45:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 00:46:20 +0000   Thu, 26 Oct 2023 00:45:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 00:46:20 +0000   Thu, 26 Oct 2023 00:45:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-188000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7fe4125713c4e90ad2ec45d2a9bca5f
	  System UUID:                d7fe4125713c4e90ad2ec45d2a9bca5f
	  Boot ID:                    97028b5e-c1fe-46d5-abb1-881a12fedf72
	  Kernel Version:             6.4.16-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ff5ll                     100m (0%)     0 (0%)      70Mi (1%)        170Mi (2%)     57s
	  kube-system                 etcd-functional-188000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         70s
	  kube-system                 kube-apiserver-functional-188000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-functional-188000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-bnvpn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-functional-188000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (6%)   0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 55s   kube-proxy       
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 70s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s   kubelet          Node functional-188000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s   kubelet          Node functional-188000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s   kubelet          Node functional-188000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           58s   node-controller  Node functional-188000 event: Registered Node functional-188000 in Controller
	  Normal  RegisteredNode           16s   node-controller  Node functional-188000 event: Registered Node functional-188000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.002920] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:07.0: PCI INT A: no GSI
	[  +0.002075] virtio-pci 0000:00:08.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:08.0: PCI INT A: no GSI
	[  +0.004650] virtio-pci 0000:00:09.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:09.0: PCI INT A: no GSI
	[  +0.005011] virtio-pci 0000:00:0a.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0a.0: PCI INT A: no GSI
	[  +0.001909] virtio-pci 0000:00:0b.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0b.0: PCI INT A: no GSI
	[  +0.005014] virtio-pci 0000:00:0c.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0c.0: PCI INT A: no GSI
	[  +0.000255] virtio-pci 0000:00:0d.0: can't derive routing for PCI INT A
	[  +0.000000] virtio-pci 0000:00:0d.0: PCI INT A: no GSI
	[  +0.003210] virtio-pci 0000:00:0e.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0e.0: PCI INT A: no GSI
	[  +0.007936] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
	[  +0.025214] lpc_ich 0000:00:1f.0: No MFD cells added
	[  +0.006812] fail to initialize ptp_kvm
	[  +0.000001] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.756658] netlink: 'rc.init': attribute type 22 has an invalid length.
	[  +0.007092] 3[378]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.199399] FAT-fs (loop0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[  +0.000376] FAT-fs (loop0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[  +0.016213] grpcfuse: loading out-of-tree module taints kernel.
	
	* 
	* ==> etcd [0f623e5f8d41] <==
	* {"level":"info","ts":"2023-10-26T00:45:15.346577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:15.346583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:15.346589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:15.346594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:15.347535Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:15.348204Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-188000 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-26T00:45:15.348248Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T00:45:15.348414Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:15.348478Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:15.348647Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:15.34827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T00:45:15.348705Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-26T00:45:15.348734Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T00:45:15.349529Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-26T00:45:15.349756Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-26T00:45:40.226127Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-26T00:45:40.226204Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-188000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2023-10-26T00:45:40.226349Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-26T00:45:40.226537Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-26T00:45:40.237057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-26T00:45:40.237132Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-26T00:45:40.237178Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-10-26T00:45:40.25251Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-26T00:45:40.252602Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-26T00:45:40.252609Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-188000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [3c8adea9036e] <==
	* {"level":"info","ts":"2023-10-26T00:45:57.823698Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-26T00:45:57.823719Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-26T00:45:57.824037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-10-26T00:45:57.824131Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-10-26T00:45:57.824371Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:57.824437Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:57.831352Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-26T00:45:57.831434Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-26T00:45:57.830729Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-26T00:45:57.832001Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-26T00:45:57.832059Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-26T00:45:59.235146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:59.235262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:59.235312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:59.235466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2023-10-26T00:45:59.235493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-10-26T00:45:59.235508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2023-10-26T00:45:59.23552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-10-26T00:45:59.23711Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-188000 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-26T00:45:59.237176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T00:45:59.237422Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T00:45:59.238048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-26T00:45:59.238657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T00:45:59.238926Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-26T00:45:59.23897Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  00:46:29 up 9 min,  0 users,  load average: 0.31, 0.41, 0.22
	Linux functional-188000 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Oct 10 20:42:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [0b21c9816561] <==
	* I1026 00:46:00.348070       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I1026 00:46:00.347870       1 controller.go:116] Starting legacy_token_tracking_controller
	I1026 00:46:00.348097       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I1026 00:46:00.348157       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I1026 00:46:00.348201       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1026 00:46:00.348372       1 aggregator.go:164] waiting for initial CRD sync...
	I1026 00:46:00.348683       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1026 00:46:00.348842       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1026 00:46:00.348118       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1026 00:46:00.523071       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 00:46:00.523085       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 00:46:00.523095       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 00:46:00.523102       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 00:46:00.523243       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 00:46:00.523292       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 00:46:00.523313       1 aggregator.go:166] initial CRD sync complete...
	I1026 00:46:00.523319       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 00:46:00.523324       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 00:46:00.523329       1 cache.go:39] Caches are synced for autoregister controller
	I1026 00:46:00.523617       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 00:46:00.523893       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 00:46:00.528536       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 00:46:01.351769       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 00:46:13.389432       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 00:46:13.438940       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [f6135c3690fc] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 00:45:50.197334       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 00:45:50.225475       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 00:45:50.228317       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [3e2bf17527f5] <==
	* I1026 00:45:31.831943       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1026 00:45:31.840012       1 shared_informer.go:318] Caches are synced for endpoint
	I1026 00:45:31.877400       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 00:45:31.880753       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1026 00:45:31.885421       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 00:45:31.904126       1 shared_informer.go:318] Caches are synced for persistent volume
	I1026 00:45:32.152779       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1026 00:45:32.231876       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1026 00:45:32.386707       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 00:45:32.386800       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 00:45:32.401582       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 00:45:32.627409       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bnvpn"
	I1026 00:45:32.826622       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-7kd6b"
	I1026 00:45:32.831631       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-ff5ll"
	I1026 00:45:32.846087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="692.840701ms"
	I1026 00:45:32.852950       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-7kd6b"
	I1026 00:45:32.930715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.594365ms"
	I1026 00:45:32.937151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.368905ms"
	I1026 00:45:32.937253       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.642µs"
	I1026 00:45:34.959284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.838µs"
	I1026 00:45:34.967834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.283µs"
	I1026 00:45:34.972519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.729µs"
	I1026 00:45:34.975513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="169.966µs"
	I1026 00:45:34.988537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.407348ms"
	I1026 00:45:34.988642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.094µs"
	
	* 
	* ==> kube-controller-manager [c51c8d65b570] <==
	* I1026 00:46:13.337661       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1026 00:46:13.337768       1 shared_informer.go:318] Caches are synced for expand
	I1026 00:46:13.338324       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1026 00:46:13.338391       1 shared_informer.go:318] Caches are synced for endpoint
	I1026 00:46:13.338354       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1026 00:46:13.338920       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1026 00:46:13.338937       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1026 00:46:13.346929       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1026 00:46:13.354418       1 shared_informer.go:318] Caches are synced for disruption
	I1026 00:46:13.356116       1 shared_informer.go:318] Caches are synced for taint
	I1026 00:46:13.356431       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1026 00:46:13.356565       1 taint_manager.go:211] "Sending events to api server"
	I1026 00:46:13.356582       1 event.go:307] "Event occurred" object="functional-188000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-188000 event: Registered Node functional-188000 in Controller"
	I1026 00:46:13.356461       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1026 00:46:13.356691       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-188000"
	I1026 00:46:13.356829       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1026 00:46:13.389380       1 shared_informer.go:318] Caches are synced for stateful set
	I1026 00:46:13.394395       1 shared_informer.go:318] Caches are synced for HPA
	I1026 00:46:13.440027       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 00:46:13.468783       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 00:46:13.482770       1 shared_informer.go:318] Caches are synced for namespace
	I1026 00:46:13.487654       1 shared_informer.go:318] Caches are synced for service account
	I1026 00:46:13.852918       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 00:46:13.888067       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 00:46:13.888111       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [97bbb1430ec1] <==
	* I1026 00:45:58.046567       1 server_others.go:69] "Using iptables proxy"
	E1026 00:45:58.124171       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-188000": dial tcp 192.168.49.2:8441: connect: connection refused
	I1026 00:46:00.523152       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1026 00:46:00.634467       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 00:46:00.637637       1 server_others.go:152] "Using iptables Proxier"
	I1026 00:46:00.637707       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 00:46:00.637714       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 00:46:00.637736       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 00:46:00.638196       1 server.go:846] "Version info" version="v1.28.3"
	I1026 00:46:00.638351       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:46:00.639088       1 config.go:188] "Starting service config controller"
	I1026 00:46:00.639120       1 config.go:315] "Starting node config controller"
	I1026 00:46:00.639132       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 00:46:00.639133       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 00:46:00.640110       1 config.go:97] "Starting endpoint slice config controller"
	I1026 00:46:00.640179       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 00:46:00.740159       1 shared_informer.go:318] Caches are synced for node config
	I1026 00:46:00.740204       1 shared_informer.go:318] Caches are synced for service config
	I1026 00:46:00.741388       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [a43456fe6b21] <==
	* I1026 00:45:33.930364       1 server_others.go:69] "Using iptables proxy"
	I1026 00:45:33.942189       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1026 00:45:34.038701       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 00:45:34.041189       1 server_others.go:152] "Using iptables Proxier"
	I1026 00:45:34.041282       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 00:45:34.041292       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 00:45:34.041312       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 00:45:34.041753       1 server.go:846] "Version info" version="v1.28.3"
	I1026 00:45:34.041812       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:45:34.042730       1 config.go:97] "Starting endpoint slice config controller"
	I1026 00:45:34.042795       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 00:45:34.042824       1 config.go:188] "Starting service config controller"
	I1026 00:45:34.042841       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 00:45:34.047275       1 config.go:315] "Starting node config controller"
	I1026 00:45:34.047631       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 00:45:34.144114       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 00:45:34.145540       1 shared_informer.go:318] Caches are synced for service config
	I1026 00:45:34.148075       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [274ded1e50f2] <==
	* E1026 00:45:16.836752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 00:45:16.836224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 00:45:16.836764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1026 00:45:16.836303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 00:45:16.836786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 00:45:16.836345       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 00:45:16.836837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1026 00:45:16.836397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 00:45:16.836853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1026 00:45:16.836434       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 00:45:16.836873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1026 00:45:16.837028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 00:45:16.837063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1026 00:45:16.837087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 00:45:16.837204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1026 00:45:17.663006       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 00:45:17.663088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1026 00:45:17.799733       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 00:45:17.799778       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 00:45:17.801879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 00:45:17.801926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 00:45:17.823058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 00:45:17.823115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1026 00:45:19.432586       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1026 00:45:40.189318       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [de0914a73beb] <==
	* I1026 00:45:58.548357       1 serving.go:348] Generated self-signed cert in-memory
	I1026 00:46:00.527074       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1026 00:46:00.527138       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:46:00.533885       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 00:46:00.533939       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 00:46:00.534005       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 00:46:00.534017       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 00:46:00.534029       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 00:46:00.534111       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 00:46:00.534420       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 00:46:00.534470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 00:46:00.635108       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 00:46:00.635159       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 00:46:00.635125       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.934142    2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dfe51b5073325f5ba2cc1b45fd812a87d8fba60716c34dee564ee01c3d53a02"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935331    2496 status_manager.go:853] "Failed to get status for pod" podUID="0f3f9f77e1fc8a12cf1621823498272c" pod="kube-system/kube-apiserver-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935526    2496 status_manager.go:853] "Failed to get status for pod" podUID="35c2ae14-426f-4a44-b88e-d3d88befe16f" pod="kube-system/kube-proxy-bnvpn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-bnvpn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935689    2496 status_manager.go:853] "Failed to get status for pod" podUID="7022509e-429b-40a1-95e2-ac3b980b2b1e" pod="kube-system/coredns-5dd5756b68-ff5ll" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ff5ll\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935818    2496 status_manager.go:853] "Failed to get status for pod" podUID="6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935937    2496 status_manager.go:853] "Failed to get status for pod" podUID="1a5cba45956bd26c7fcaab9a2058286e" pod="kube-system/kube-controller-manager-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.936061    2496 status_manager.go:853] "Failed to get status for pod" podUID="884ed00cd2aaa3b4f518197dc5a844ef" pod="kube-system/etcd-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.936173    2496 status_manager.go:853] "Failed to get status for pod" podUID="5b69b95f77dea85816490ff8f86d59b3" pod="kube-system/kube-scheduler-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.027715    2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a4bc70f7327ec61234ddaf949266c43749e9aa7244880110cbb75b815a88b9f"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.028555    2496 status_manager.go:853] "Failed to get status for pod" podUID="35c2ae14-426f-4a44-b88e-d3d88befe16f" pod="kube-system/kube-proxy-bnvpn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-bnvpn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.029326    2496 status_manager.go:853] "Failed to get status for pod" podUID="7022509e-429b-40a1-95e2-ac3b980b2b1e" pod="kube-system/coredns-5dd5756b68-ff5ll" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ff5ll\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.030672    2496 status_manager.go:853] "Failed to get status for pod" podUID="6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.031069    2496 status_manager.go:853] "Failed to get status for pod" podUID="1a5cba45956bd26c7fcaab9a2058286e" pod="kube-system/kube-controller-manager-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.031381    2496 status_manager.go:853] "Failed to get status for pod" podUID="884ed00cd2aaa3b4f518197dc5a844ef" pod="kube-system/etcd-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.031772    2496 status_manager.go:853] "Failed to get status for pod" podUID="5b69b95f77dea85816490ff8f86d59b3" pod="kube-system/kube-scheduler-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.031959    2496 status_manager.go:853] "Failed to get status for pod" podUID="0f3f9f77e1fc8a12cf1621823498272c" pod="kube-system/kube-apiserver-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.148811    2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c29298a9a01a00b04a2372723fc93bcf9a28f2909c24e0e2f2a8fdbbd36d2c8d"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.233984    2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559b9a278dba392d83f546119ba1fbdb9d79aa4041d57c4d2c3a5243195064d8"
	Oct 26 00:45:59 functional-188000 kubelet[2496]: I1026 00:45:59.334479    2496 scope.go:117] "RemoveContainer" containerID="acd3650135af374f4320e0d6bcd857120933741c11ca50532f0fb03830938045"
	Oct 26 00:45:59 functional-188000 kubelet[2496]: I1026 00:45:59.334754    2496 scope.go:117] "RemoveContainer" containerID="556e0913a4194f07cf0a6c6a9b4ddec0df633530cf7e3a9870b1a67c0a39079f"
	Oct 26 00:45:59 functional-188000 kubelet[2496]: E1026 00:45:59.335043    2496 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f)\"" pod="kube-system/storage-provisioner" podUID="6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f"
	Oct 26 00:46:00 functional-188000 kubelet[2496]: E1026 00:46:00.364718    2496 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 26 00:46:00 functional-188000 kubelet[2496]: I1026 00:46:00.530690    2496 scope.go:117] "RemoveContainer" containerID="556e0913a4194f07cf0a6c6a9b4ddec0df633530cf7e3a9870b1a67c0a39079f"
	Oct 26 00:46:00 functional-188000 kubelet[2496]: E1026 00:46:00.531052    2496 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f)\"" pod="kube-system/storage-provisioner" podUID="6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f"
	Oct 26 00:46:15 functional-188000 kubelet[2496]: I1026 00:46:15.435655    2496 scope.go:117] "RemoveContainer" containerID="556e0913a4194f07cf0a6c6a9b4ddec0df633530cf7e3a9870b1a67c0a39079f"
	
	* 
	* ==> storage-provisioner [556e0913a419] <==
	* I1026 00:45:57.743158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 00:45:57.745843       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [c500b713ece1] <==
	* I1026 00:46:15.508682       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 00:46:15.531510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 00:46:15.531595       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-188000 -n functional-188000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-188000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (4.05s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (5.05s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-188000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-188000 get pods: exit status 1 (793.297767ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-188000 get pods": exit status 1
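The fork/exec "exec format error" in the stderr above typically means the cached kubectl at that path is not a valid darwin/amd64 executable on this host (for example a wrong-architecture or truncated download); the test output itself does not confirm the cause. A minimal triage sketch, assuming shell access on the Jenkins agent and reusing the cache path reported in the stderr (everything else here is an assumption, not part of the captured run):
	# Hedged triage sketch, not part of the test output above.
	uname -m    # host architecture; x86_64 is expected for the darwin/amd64 cache
	# Inspect the cached binary that failed to exec; a Mach-O 64-bit x86_64 executable is expected.
	file /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl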
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-188000
helpers_test.go:235: (dbg) docker inspect functional-188000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb",
	        "Created": "2023-10-26T00:45:03.536217576Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T00:45:03.759078925Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb/hostname",
	        "HostsPath": "/var/lib/docker/containers/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb/hosts",
	        "LogPath": "/var/lib/docker/containers/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb/a0c213dc3ac2433b9ac003938903e568bd3d28dbde6fefb7f904b5a6a1df3bfb-json.log",
	        "Name": "/functional-188000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-188000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-188000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c353ed8215fac2e882031b322e1aef62fbefadc60c1c795e5167fdca1713513b-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c353ed8215fac2e882031b322e1aef62fbefadc60c1c795e5167fdca1713513b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c353ed8215fac2e882031b322e1aef62fbefadc60c1c795e5167fdca1713513b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c353ed8215fac2e882031b322e1aef62fbefadc60c1c795e5167fdca1713513b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-188000",
	                "Source": "/var/lib/docker/volumes/functional-188000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-188000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-188000",
	                "name.minikube.sigs.k8s.io": "functional-188000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f41c0f00f47c85ccab259f3c9185c3fd8f888b614d21172aa6d7b42253a9d297",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56240"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56241"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56242"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56238"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56239"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f41c0f00f47c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-188000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a0c213dc3ac2",
	                        "functional-188000"
	                    ],
	                    "NetworkID": "9c6584acd3f5f010c10228aadf5881262279d8de66e3b7ef13f7639377f1b7ba",
	                    "EndpointID": "0df72d627619a30b77bdd3ae45493e740e172f3b4f0d497c5be486ae53327208",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-188000 -n functional-188000
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 logs -n 25: (3.225704525s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                              Args                              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | pause                                                          |                   |         |         |                     |                     |
	| unpause | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| unpause | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | unpause                                                        |                   |         |         |                     |                     |
	| stop    | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| stop    | nospam-797000 --log_dir                                        | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 |                   |         |         |                     |                     |
	|         | stop                                                           |                   |         |         |                     |                     |
	| delete  | -p nospam-797000                                               | nospam-797000     | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:44 PDT |
	| start   | -p functional-188000                                           | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:44 PDT | 25 Oct 23 17:45 PDT |
	|         | --memory=4000                                                  |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                          |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                     |                   |         |         |                     |                     |
	| start   | -p functional-188000                                           | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:45 PDT | 25 Oct 23 17:46 PDT |
	|         | --alsologtostderr -v=8                                         |                   |         |         |                     |                     |
	| cache   | functional-188000 cache add                                    | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | functional-188000 cache add                                    | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | functional-188000 cache add                                    | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-188000 cache add                                    | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | minikube-local-cache-test:functional-188000                    |                   |         |         |                     |                     |
	| cache   | functional-188000 cache delete                                 | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | minikube-local-cache-test:functional-188000                    |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:3.3                                      |                   |         |         |                     |                     |
	| cache   | list                                                           | minikube          | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	| ssh     | functional-188000 ssh sudo                                     | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | crictl images                                                  |                   |         |         |                     |                     |
	| ssh     | functional-188000                                              | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | ssh sudo docker rmi                                            |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| ssh     | functional-188000 ssh                                          | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT |                     |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | functional-188000 cache reload                                 | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	| ssh     | functional-188000 ssh                                          | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | sudo crictl inspecti                                           |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:3.1                                      |                   |         |         |                     |                     |
	| cache   | delete                                                         | minikube          | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT | 25 Oct 23 17:46 PDT |
	|         | registry.k8s.io/pause:latest                                   |                   |         |         |                     |                     |
	| kubectl | functional-188000 kubectl --                                   | functional-188000 | jenkins | v1.31.2 | 25 Oct 23 17:46 PDT |                     |
	|         | --context functional-188000                                    |                   |         |         |                     |                     |
	|         | get pods                                                       |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 17:45:37
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 17:45:37.083097   66547 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:45:37.083397   66547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:45:37.083403   66547 out.go:309] Setting ErrFile to fd 2...
	I1025 17:45:37.083407   66547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:45:37.083612   66547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 17:45:37.085071   66547 out.go:303] Setting JSON to false
	I1025 17:45:37.106937   66547 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":31505,"bootTime":1698249632,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 17:45:37.107078   66547 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 17:45:37.128825   66547 out.go:177] * [functional-188000] minikube v1.31.2 on Darwin 14.0
	I1025 17:45:37.172354   66547 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 17:45:37.172489   66547 notify.go:220] Checking for updates...
	I1025 17:45:37.216499   66547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:45:37.238233   66547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 17:45:37.259405   66547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:45:37.280433   66547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 17:45:37.301239   66547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 17:45:37.323070   66547 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 17:45:37.323230   66547 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 17:45:37.380875   66547 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 17:45:37.381029   66547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:45:37.486222   66547 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2023-10-26 00:45:37.475652597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:45:37.528136   66547 out.go:177] * Using the docker driver based on existing profile
	I1025 17:45:37.549294   66547 start.go:298] selected driver: docker
	I1025 17:45:37.549311   66547 start.go:902] validating driver "docker" against &{Name:functional-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:45:37.549389   66547 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 17:45:37.549530   66547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:45:37.654871   66547 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2023-10-26 00:45:37.643015238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:45:37.658120   66547 cni.go:84] Creating CNI manager for ""
	I1025 17:45:37.658148   66547 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 17:45:37.658164   66547 start_flags.go:323] config:
	{Name:functional-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:45:37.701410   66547 out.go:177] * Starting control plane node functional-188000 in cluster functional-188000
	I1025 17:45:37.722572   66547 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 17:45:37.744136   66547 out.go:177] * Pulling base image ...
	I1025 17:45:37.786325   66547 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:45:37.786376   66547 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 17:45:37.786391   66547 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 17:45:37.786409   66547 cache.go:56] Caching tarball of preloaded images
	I1025 17:45:37.786592   66547 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 17:45:37.786614   66547 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 17:45:37.786761   66547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/config.json ...
	I1025 17:45:37.838933   66547 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 17:45:37.838964   66547 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 17:45:37.838985   66547 cache.go:194] Successfully downloaded all kic artifacts
	I1025 17:45:37.839031   66547 start.go:365] acquiring machines lock for functional-188000: {Name:mk049bc040d714cb261ebd3cb2ab3e83ad65175f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 17:45:37.839111   66547 start.go:369] acquired machines lock for "functional-188000" in 60.988µs
	I1025 17:45:37.839133   66547 start.go:96] Skipping create...Using existing machine configuration
	I1025 17:45:37.839143   66547 fix.go:54] fixHost starting: 
	I1025 17:45:37.839392   66547 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
	I1025 17:45:37.890210   66547 fix.go:102] recreateIfNeeded on functional-188000: state=Running err=<nil>
	W1025 17:45:37.890241   66547 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 17:45:37.933685   66547 out.go:177] * Updating the running docker "functional-188000" container ...
	I1025 17:45:37.954834   66547 machine.go:88] provisioning docker machine ...
	I1025 17:45:37.954890   66547 ubuntu.go:169] provisioning hostname "functional-188000"
	I1025 17:45:37.955095   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.007234   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:38.007576   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:38.007590   66547 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-188000 && echo "functional-188000" | sudo tee /etc/hostname
	I1025 17:45:38.139776   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-188000
	
	I1025 17:45:38.139871   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.191354   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:38.191648   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:38.191662   66547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-188000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-188000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-188000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 17:45:38.313799   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 17:45:38.313820   66547 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 17:45:38.313844   66547 ubuntu.go:177] setting up certificates
	I1025 17:45:38.313855   66547 provision.go:83] configureAuth start
	I1025 17:45:38.313936   66547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-188000
	I1025 17:45:38.364776   66547 provision.go:138] copyHostCerts
	I1025 17:45:38.364836   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 17:45:38.364892   66547 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 17:45:38.364902   66547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 17:45:38.365008   66547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 17:45:38.365211   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 17:45:38.365238   66547 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 17:45:38.365242   66547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 17:45:38.365307   66547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 17:45:38.365467   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 17:45:38.365509   66547 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 17:45:38.365513   66547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 17:45:38.365571   66547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 17:45:38.365709   66547 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.functional-188000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-188000]
	I1025 17:45:38.525621   66547 provision.go:172] copyRemoteCerts
	I1025 17:45:38.525682   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 17:45:38.525747   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.577340   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:38.665474   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 17:45:38.665544   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 17:45:38.687976   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 17:45:38.688036   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1025 17:45:38.710086   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 17:45:38.710170   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 17:45:38.733390   66547 provision.go:86] duration metric: configureAuth took 419.508117ms
	I1025 17:45:38.733404   66547 ubuntu.go:193] setting minikube options for container-runtime
	I1025 17:45:38.733544   66547 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 17:45:38.733620   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.785970   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:38.786249   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:38.786259   66547 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 17:45:38.909272   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 17:45:38.909285   66547 ubuntu.go:71] root file system type: overlay
	I1025 17:45:38.909388   66547 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 17:45:38.909477   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:38.960504   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:38.960822   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:38.960875   66547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 17:45:39.094927   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 17:45:39.095034   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.146810   66547 main.go:141] libmachine: Using SSH client type: native
	I1025 17:45:39.147098   66547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56240 <nil> <nil>}
	I1025 17:45:39.147114   66547 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 17:45:39.275394   66547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 17:45:39.275411   66547 machine.go:91] provisioned docker machine in 1.320517407s
	I1025 17:45:39.275417   66547 start.go:300] post-start starting for "functional-188000" (driver="docker")
	I1025 17:45:39.275429   66547 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 17:45:39.275513   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 17:45:39.275568   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.327545   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:39.418084   66547 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 17:45:39.422415   66547 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1025 17:45:39.422424   66547 command_runner.go:130] > NAME="Ubuntu"
	I1025 17:45:39.422428   66547 command_runner.go:130] > VERSION_ID="22.04"
	I1025 17:45:39.422437   66547 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1025 17:45:39.422443   66547 command_runner.go:130] > VERSION_CODENAME=jammy
	I1025 17:45:39.422446   66547 command_runner.go:130] > ID=ubuntu
	I1025 17:45:39.422450   66547 command_runner.go:130] > ID_LIKE=debian
	I1025 17:45:39.422454   66547 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 17:45:39.422459   66547 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 17:45:39.422468   66547 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 17:45:39.422475   66547 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 17:45:39.422479   66547 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1025 17:45:39.422525   66547 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 17:45:39.422543   66547 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 17:45:39.422550   66547 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 17:45:39.422563   66547 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 17:45:39.422572   66547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 17:45:39.422663   66547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 17:45:39.422807   66547 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 17:45:39.422815   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /etc/ssl/certs/652922.pem
	I1025 17:45:39.422963   66547 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/test/nested/copy/65292/hosts -> hosts in /etc/test/nested/copy/65292
	I1025 17:45:39.422969   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/test/nested/copy/65292/hosts -> /etc/test/nested/copy/65292/hosts
	I1025 17:45:39.423011   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/65292
	I1025 17:45:39.432101   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 17:45:39.454931   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/test/nested/copy/65292/hosts --> /etc/test/nested/copy/65292/hosts (40 bytes)
	I1025 17:45:39.478218   66547 start.go:303] post-start completed in 202.785073ms
	I1025 17:45:39.478295   66547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 17:45:39.478363   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.529399   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:39.616302   66547 command_runner.go:130] > 6%
	I1025 17:45:39.616374   66547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 17:45:39.621796   66547 command_runner.go:130] > 92G
	I1025 17:45:39.622081   66547 fix.go:56] fixHost completed within 1.782885312s
	I1025 17:45:39.622096   66547 start.go:83] releasing machines lock for "functional-188000", held for 1.782924172s
	I1025 17:45:39.622178   66547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-188000
	I1025 17:45:39.674301   66547 ssh_runner.go:195] Run: cat /version.json
	I1025 17:45:39.674307   66547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 17:45:39.674382   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.674382   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:39.731160   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:39.731332   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:45:39.923267   66547 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 17:45:39.925517   66547 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1698055645-17423", "minikube_version": "v1.31.2", "commit": "585245745aba695f9444ad633713942a6eacd882"}
	I1025 17:45:39.925667   66547 ssh_runner.go:195] Run: systemctl --version
	I1025 17:45:39.930728   66547 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1025 17:45:39.930757   66547 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1025 17:45:39.931003   66547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 17:45:39.937006   66547 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1025 17:45:39.937030   66547 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I1025 17:45:39.937042   66547 command_runner.go:130] > Device: a4h/164d	Inode: 1066465     Links: 1
	I1025 17:45:39.937053   66547 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 17:45:39.937062   66547 command_runner.go:130] > Access: 2023-10-26 00:45:07.468112584 +0000
	I1025 17:45:39.937067   66547 command_runner.go:130] > Modify: 2023-10-26 00:45:07.442112582 +0000
	I1025 17:45:39.937071   66547 command_runner.go:130] > Change: 2023-10-26 00:45:07.442112582 +0000
	I1025 17:45:39.937076   66547 command_runner.go:130] >  Birth: 2023-10-26 00:45:07.442112582 +0000
	I1025 17:45:39.937254   66547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 17:45:39.957372   66547 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 17:45:39.957452   66547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 17:45:39.966943   66547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 17:45:39.966955   66547 start.go:472] detecting cgroup driver to use...
	I1025 17:45:39.966970   66547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 17:45:39.967079   66547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 17:45:39.983952   66547 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1025 17:45:39.984034   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 17:45:39.994911   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 17:45:40.005499   66547 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 17:45:40.005559   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 17:45:40.016325   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 17:45:40.026997   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 17:45:40.037495   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 17:45:40.048000   66547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 17:45:40.058293   66547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
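	Taken together, the sed edits above pin the pause image, force the runc v2 runtime, disable the systemd cgroup driver, and point containerd's CNI config at /etc/cni/net.d. A quick spot-check of the resulting /etc/containerd/config.toml (surrounding TOML structure assumed, only the touched keys shown):
	  grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	  # expected after the edits:
	  #   sandbox_image = "registry.k8s.io/pause:3.9"
	  #   restrict_oom_score_adj = false
	  #   SystemdCgroup = false
	  #   conf_dir = "/etc/cni/net.d"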
	I1025 17:45:40.069102   66547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 17:45:40.077883   66547 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1025 17:45:40.078751   66547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 17:45:40.087876   66547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:45:40.167094   66547 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 17:45:50.369793   66547 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.202347018s)
	I1025 17:45:50.369810   66547 start.go:472] detecting cgroup driver to use...
	I1025 17:45:50.369822   66547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 17:45:50.369881   66547 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 17:45:50.390906   66547 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 17:45:50.391090   66547 command_runner.go:130] > [Unit]
	I1025 17:45:50.391103   66547 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 17:45:50.391109   66547 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 17:45:50.391113   66547 command_runner.go:130] > BindsTo=containerd.service
	I1025 17:45:50.391120   66547 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 17:45:50.391124   66547 command_runner.go:130] > Wants=network-online.target
	I1025 17:45:50.391134   66547 command_runner.go:130] > Requires=docker.socket
	I1025 17:45:50.391140   66547 command_runner.go:130] > StartLimitBurst=3
	I1025 17:45:50.391145   66547 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 17:45:50.391150   66547 command_runner.go:130] > [Service]
	I1025 17:45:50.391153   66547 command_runner.go:130] > Type=notify
	I1025 17:45:50.391157   66547 command_runner.go:130] > Restart=on-failure
	I1025 17:45:50.391163   66547 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 17:45:50.391188   66547 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 17:45:50.391200   66547 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 17:45:50.391215   66547 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 17:45:50.391229   66547 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 17:45:50.391242   66547 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 17:45:50.391248   66547 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 17:45:50.391259   66547 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 17:45:50.391266   66547 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 17:45:50.391269   66547 command_runner.go:130] > ExecStart=
	I1025 17:45:50.391280   66547 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 17:45:50.391286   66547 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 17:45:50.391292   66547 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 17:45:50.391299   66547 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 17:45:50.391307   66547 command_runner.go:130] > LimitNOFILE=infinity
	I1025 17:45:50.391310   66547 command_runner.go:130] > LimitNPROC=infinity
	I1025 17:45:50.391314   66547 command_runner.go:130] > LimitCORE=infinity
	I1025 17:45:50.391333   66547 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 17:45:50.391341   66547 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 17:45:50.391345   66547 command_runner.go:130] > TasksMax=infinity
	I1025 17:45:50.391348   66547 command_runner.go:130] > TimeoutStartSec=0
	I1025 17:45:50.391357   66547 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 17:45:50.391370   66547 command_runner.go:130] > Delegate=yes
	I1025 17:45:50.391385   66547 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 17:45:50.391399   66547 command_runner.go:130] > KillMode=process
	I1025 17:45:50.391416   66547 command_runner.go:130] > [Install]
	I1025 17:45:50.391423   66547 command_runner.go:130] > WantedBy=multi-user.target
	I1025 17:45:50.392381   66547 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 17:45:50.392455   66547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 17:45:50.405155   66547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 17:45:50.422541   66547 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 17:45:50.423432   66547 ssh_runner.go:195] Run: which cri-dockerd
	I1025 17:45:50.428150   66547 command_runner.go:130] > /usr/bin/cri-dockerd
	I1025 17:45:50.428262   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 17:45:50.438351   66547 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 17:45:50.459440   66547 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 17:45:50.562753   66547 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 17:45:50.662527   66547 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 17:45:50.662614   66547 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 17:45:50.680668   66547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:45:50.770208   66547 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 17:45:51.066885   66547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 17:45:51.153020   66547 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 17:45:51.213077   66547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 17:45:51.277059   66547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:45:51.341504   66547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 17:45:51.374716   66547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:45:51.448574   66547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 17:45:51.549016   66547 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 17:45:51.549109   66547 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 17:45:51.554558   66547 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 17:45:51.554572   66547 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 17:45:51.554577   66547 command_runner.go:130] > Device: ach/172d	Inode: 667         Links: 1
	I1025 17:45:51.554583   66547 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 17:45:51.554593   66547 command_runner.go:130] > Access: 2023-10-26 00:45:51.462295391 +0000
	I1025 17:45:51.554598   66547 command_runner.go:130] > Modify: 2023-10-26 00:45:51.462295391 +0000
	I1025 17:45:51.554615   66547 command_runner.go:130] > Change: 2023-10-26 00:45:51.483295392 +0000
	I1025 17:45:51.554620   66547 command_runner.go:130] >  Birth: 2023-10-26 00:45:51.462295391 +0000
	I1025 17:45:51.554637   66547 start.go:540] Will wait 60s for crictl version
	I1025 17:45:51.554688   66547 ssh_runner.go:195] Run: which crictl
	I1025 17:45:51.559315   66547 command_runner.go:130] > /usr/bin/crictl
	I1025 17:45:51.559370   66547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 17:45:51.606043   66547 command_runner.go:130] > Version:  0.1.0
	I1025 17:45:51.606056   66547 command_runner.go:130] > RuntimeName:  docker
	I1025 17:45:51.606060   66547 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1025 17:45:51.606068   66547 command_runner.go:130] > RuntimeApiVersion:  v1
	I1025 17:45:51.608169   66547 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 17:45:51.608268   66547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 17:45:51.633657   66547 command_runner.go:130] > 24.0.6
	I1025 17:45:51.634797   66547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 17:45:51.658929   66547 command_runner.go:130] > 24.0.6
	I1025 17:45:51.684604   66547 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 17:45:51.684756   66547 cli_runner.go:164] Run: docker exec -t functional-188000 dig +short host.docker.internal
	I1025 17:45:51.824395   66547 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 17:45:51.824498   66547 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 17:45:51.829703   66547 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1025 17:45:51.829845   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:51.881103   66547 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:45:51.881175   66547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 17:45:51.900536   66547 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1025 17:45:51.900549   66547 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1025 17:45:51.900566   66547 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 17:45:51.900571   66547 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1025 17:45:51.900575   66547 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1025 17:45:51.900593   66547 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1025 17:45:51.900616   66547 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1025 17:45:51.900625   66547 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 17:45:51.901796   66547 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 17:45:51.901822   66547 docker.go:623] Images already preloaded, skipping extraction
	I1025 17:45:51.901910   66547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 17:45:51.922427   66547 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1025 17:45:51.922440   66547 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1025 17:45:51.922444   66547 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 17:45:51.922450   66547 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1025 17:45:51.922470   66547 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1025 17:45:51.922479   66547 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1025 17:45:51.922484   66547 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1025 17:45:51.922492   66547 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 17:45:51.923705   66547 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 17:45:51.923726   66547 cache_images.go:84] Images are preloaded, skipping loading
	I1025 17:45:51.923805   66547 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 17:45:51.976031   66547 command_runner.go:130] > cgroupfs
	I1025 17:45:51.977337   66547 cni.go:84] Creating CNI manager for ""
	I1025 17:45:51.977352   66547 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 17:45:51.977368   66547 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 17:45:51.977381   66547 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-188000 NodeName:functional-188000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 17:45:51.977519   66547 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-188000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 17:45:51.977587   66547 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-188000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1025 17:45:51.977652   66547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 17:45:51.986828   66547 command_runner.go:130] > kubeadm
	I1025 17:45:51.986840   66547 command_runner.go:130] > kubectl
	I1025 17:45:51.986844   66547 command_runner.go:130] > kubelet
	I1025 17:45:51.987608   66547 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 17:45:51.987670   66547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 17:45:51.996786   66547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 17:45:52.013733   66547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 17:45:52.031070   66547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
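	The generated kubeadm config (rendered a few lines above) is copied to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the existing kubeadm.yaml. To sanity-check such a config by hand on the node, a dry run with the bundled kubeadm binary should work, for example:
	  sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run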
	I1025 17:45:52.048179   66547 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 17:45:52.052588   66547 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1025 17:45:52.052630   66547 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000 for IP: 192.168.49.2
	I1025 17:45:52.052647   66547 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:45:52.052804   66547 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 17:45:52.052854   66547 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 17:45:52.052933   66547 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.key
	I1025 17:45:52.052995   66547 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.key.dd3b5fb2
	I1025 17:45:52.053041   66547 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.key
	I1025 17:45:52.053050   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 17:45:52.053069   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 17:45:52.053094   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 17:45:52.053111   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 17:45:52.053128   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 17:45:52.053143   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 17:45:52.053171   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 17:45:52.053200   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 17:45:52.053305   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 17:45:52.053339   66547 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 17:45:52.053350   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 17:45:52.053381   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 17:45:52.053412   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 17:45:52.053453   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 17:45:52.053521   66547 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 17:45:52.053552   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem -> /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.053575   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.053592   66547 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.054086   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 17:45:52.076922   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 17:45:52.099904   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 17:45:52.128904   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 17:45:52.152002   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 17:45:52.174881   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 17:45:52.197639   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 17:45:52.220319   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 17:45:52.243650   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 17:45:52.266877   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 17:45:52.289601   66547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 17:45:52.312621   66547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 17:45:52.329975   66547 ssh_runner.go:195] Run: openssl version
	I1025 17:45:52.335786   66547 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1025 17:45:52.336008   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 17:45:52.346331   66547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.351222   66547 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.351241   66547 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.351283   66547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:45:52.358052   66547 command_runner.go:130] > b5213941
	I1025 17:45:52.358206   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 17:45:52.367944   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 17:45:52.377852   66547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.382321   66547 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.382334   66547 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.382373   66547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 17:45:52.389461   66547 command_runner.go:130] > 51391683
	I1025 17:45:52.389678   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 17:45:52.399052   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 17:45:52.409097   66547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.413564   66547 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.413647   66547 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.413692   66547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 17:45:52.420528   66547 command_runner.go:130] > 3ec20f2e
	I1025 17:45:52.420684   66547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
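	The three hash-and-link sequences above follow the standard OpenSSL CA directory layout: each trusted certificate is symlinked under /etc/ssl/certs by its subject hash so TLS clients can locate it. Condensed form of the pattern (certificate path hypothetical):
	  CERT=/usr/share/ca-certificates/example.pem   # hypothetical cert path
	  HASH=$(openssl x509 -hash -noout -in "$CERT")
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"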
	I1025 17:45:52.430431   66547 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 17:45:52.435076   66547 command_runner.go:130] > ca.crt
	I1025 17:45:52.435086   66547 command_runner.go:130] > ca.key
	I1025 17:45:52.435090   66547 command_runner.go:130] > healthcheck-client.crt
	I1025 17:45:52.435094   66547 command_runner.go:130] > healthcheck-client.key
	I1025 17:45:52.435099   66547 command_runner.go:130] > peer.crt
	I1025 17:45:52.435103   66547 command_runner.go:130] > peer.key
	I1025 17:45:52.435106   66547 command_runner.go:130] > server.crt
	I1025 17:45:52.435109   66547 command_runner.go:130] > server.key
	I1025 17:45:52.435173   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 17:45:52.442174   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.442317   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 17:45:52.449012   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.449196   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 17:45:52.455985   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.456216   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 17:45:52.462741   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.462923   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 17:45:52.469578   66547 command_runner.go:130] > Certificate will not expire
	I1025 17:45:52.469923   66547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 17:45:52.476788   66547 command_runner.go:130] > Certificate will not expire
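	Each of the expiry probes above uses -checkend 86400, i.e. openssl exits non-zero if the certificate would expire within the next 86400 seconds (24 hours). Standalone form of the same check:
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"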
	I1025 17:45:52.476832   66547 kubeadm.go:404] StartCluster: {Name:functional-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:45:52.476938   66547 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 17:45:52.496829   66547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 17:45:52.506504   66547 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1025 17:45:52.506515   66547 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1025 17:45:52.506520   66547 command_runner.go:130] > /var/lib/minikube/etcd:
	I1025 17:45:52.506523   66547 command_runner.go:130] > member
	I1025 17:45:52.506533   66547 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 17:45:52.506546   66547 kubeadm.go:636] restartCluster start
	I1025 17:45:52.506595   66547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 17:45:52.515848   66547 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:52.515931   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-188000
	I1025 17:45:52.569559   66547 kubeconfig.go:92] found "functional-188000" server: "https://127.0.0.1:56239"
	I1025 17:45:52.569934   66547 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:45:52.570126   66547 kapi.go:59] client config for functional-188000: &rest.Config{Host:"https://127.0.0.1:56239", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 17:45:52.570622   66547 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 17:45:52.570796   66547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 17:45:52.580349   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:52.580404   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:52.591038   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:52.591057   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:52.591103   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:52.601570   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:53.101963   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:53.102218   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:53.115033   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:53.603778   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:53.604021   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:53.616635   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:54.102204   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:54.102301   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:54.114997   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:54.601714   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:54.601856   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:54.613179   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:55.103783   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:55.104030   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:55.116743   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:55.601962   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:55.602083   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:55.614008   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:56.103789   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:56.103986   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:56.117349   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:56.601950   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:56.602214   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:56.614821   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:57.101775   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:57.111517   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:57.143044   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:57.602179   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:57.602282   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 17:45:57.643548   66547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 17:45:58.102800   66547 api_server.go:166] Checking apiserver status ...
	I1025 17:45:58.102905   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 17:45:58.146513   66547 command_runner.go:130] > 5688
	I1025 17:45:58.146615   66547 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5688/cgroup
	W1025 17:45:58.228791   66547 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5688/cgroup: Process exited with status 1
	stdout:
	
	stderr:
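	The freezer lookup above comes back empty, most likely because the node runs a unified cgroup v2 hierarchy, where /proc/<pid>/cgroup has a single 0:: entry and no per-controller freezer: line. A quick way to check which hierarchy is in use (shown for reference, not part of this run):
	  stat -fc %T /sys/fs/cgroup   # prints cgroup2fs on a unified (v2) hierarchy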
	I1025 17:45:58.228907   66547 ssh_runner.go:195] Run: ls
	I1025 17:45:58.237805   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:00.358801   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 17:46:00.358845   66547 retry.go:31] will retry after 251.807756ms: https://127.0.0.1:56239/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 17:46:00.611423   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:00.629498   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:00.629530   66547 retry.go:31] will retry after 358.051127ms: https://127.0.0.1:56239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:00.989350   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:00.996835   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:00.996858   66547 retry.go:31] will retry after 308.790425ms: https://127.0.0.1:56239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:01.307739   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:01.314935   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:01.314957   66547 retry.go:31] will retry after 445.51233ms: https://127.0.0.1:56239/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 17:46:01.761530   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:01.770260   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 200:
	ok
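	The retries above show minikube polling the apiserver's /healthz endpoint and backing off while the rbac and scheduling post-start hooks are still reported as failed, until the endpoint finally returns 200. Below is a minimal sketch of that kind of poll loop; the URL, timeout, backoff values and the skipped certificate verification are illustrative assumptions, not minikube's actual implementation.

```go
// healthzwait: a minimal sketch of polling an apiserver /healthz endpoint
// until it reports 200, with a growing delay between attempts. The URL,
// timeout and backoff values below are illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// For brevity this skips certificate verification; a real client would
	// present the cluster CA and client certificate instead.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}

	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// A 500 body lists each check as [+] ok or [-] failed, as in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the delay before the next retry
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:56239/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```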
	I1025 17:46:01.770401   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:01.770408   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:01.770419   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:01.770427   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:01.829087   66547 round_trippers.go:574] Response Status: 200 OK in 58 milliseconds
	I1025 17:46:01.829154   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:01.829170   66547 round_trippers.go:580]     Audit-Id: b7d1cf3e-c721-48a5-bbe0-244b7bd61c9e
	I1025 17:46:01.829183   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:01.829192   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:01.829200   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:01.829206   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:01.829214   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:01 GMT
	I1025 17:46:01.829779   66547 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"399"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51058 chars]
	I1025 17:46:01.832402   66547 system_pods.go:86] 7 kube-system pods found
	I1025 17:46:01.832414   66547 system_pods.go:89] "coredns-5dd5756b68-ff5ll" [7022509e-429b-40a1-95e2-ac3b980b2b1e] Running
	I1025 17:46:01.832420   66547 system_pods.go:89] "etcd-functional-188000" [095a6b2c-e973-4dad-9409-01e79c7e3021] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 17:46:01.832426   66547 system_pods.go:89] "kube-apiserver-functional-188000" [6811c037-9ba7-49b2-9dc8-e7c835a205ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 17:46:01.832432   66547 system_pods.go:89] "kube-controller-manager-functional-188000" [000afba9-c176-4b7f-9674-24c20b7b1e92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 17:46:01.832441   66547 system_pods.go:89] "kube-proxy-bnvpn" [35c2ae14-426f-4a44-b88e-d3d88befe16f] Running
	I1025 17:46:01.832451   66547 system_pods.go:89] "kube-scheduler-functional-188000" [ac7541cf-a304-4933-acea-37c4f53f6710] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 17:46:01.832471   66547 system_pods.go:89] "storage-provisioner" [6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
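	The system_pods lines above enumerate the kube-system pods and flag containers that are not yet ready. A minimal sketch of the same enumeration with client-go follows; the kubeconfig path is a hypothetical placeholder.

```go
// listsystempods: a minimal sketch of the "N kube-system pods found" step:
// list pods in kube-system and report which containers are not yet ready.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	for _, pod := range pods.Items {
		var unready []string
		for _, cs := range pod.Status.ContainerStatuses {
			if !cs.Ready {
				unready = append(unready, cs.Name)
			}
		}
		if len(unready) == 0 {
			fmt.Printf("%q %s\n", pod.Name, pod.Status.Phase)
		} else {
			fmt.Printf("%q %s (containers with unready status: %v)\n", pod.Name, pod.Status.Phase, unready)
		}
	}
}
```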
	I1025 17:46:01.832513   66547 round_trippers.go:463] GET https://127.0.0.1:56239/version
	I1025 17:46:01.832520   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:01.832528   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:01.832537   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:01.834045   66547 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 17:46:01.834055   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:01.834060   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:01 GMT
	I1025 17:46:01.834064   66547 round_trippers.go:580]     Audit-Id: a3d04dca-a785-46c9-93ef-676f69eaa058
	I1025 17:46:01.834069   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:01.834074   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:01.834082   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:01.834087   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:01.834092   66547 round_trippers.go:580]     Content-Length: 264
	I1025 17:46:01.834103   66547 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
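	The /version response above is what the log then reports as the control plane version. A minimal sketch of fetching and comparing it via client-go's discovery client follows; the kubeconfig path and the expected version are assumptions for illustration.

```go
// controlplaneversion: a minimal sketch of the control plane version check:
// ask the apiserver's /version endpoint and compare it with the expected version.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Discovery().ServerVersion() issues the same GET /version seen in the log
	// and decodes the JSON body into a version.Info struct.
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s (built %s, %s)\n", info.GitVersion, info.BuildDate, info.Platform)

	if info.GitVersion != "v1.28.3" {
		fmt.Println("running cluster would need reconfiguration")
	}
}
```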
	I1025 17:46:01.834143   66547 api_server.go:141] control plane version: v1.28.3
	I1025 17:46:01.834151   66547 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I1025 17:46:01.834158   66547 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I1025 17:46:01.834167   66547 kubeadm.go:640] restartCluster took 9.32733389s
	I1025 17:46:01.834173   66547 kubeadm.go:406] StartCluster complete in 9.357063265s
	I1025 17:46:01.834183   66547 settings.go:142] acquiring lock: {Name:mkca0a8fe84aa865309571104a1d51551b90d38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:46:01.834265   66547 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:46:01.834706   66547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:46:01.834987   66547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 17:46:01.835003   66547 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 17:46:01.835042   66547 addons.go:69] Setting default-storageclass=true in profile "functional-188000"
	I1025 17:46:01.835058   66547 addons.go:69] Setting storage-provisioner=true in profile "functional-188000"
	I1025 17:46:01.835060   66547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-188000"
	I1025 17:46:01.835070   66547 addons.go:231] Setting addon storage-provisioner=true in "functional-188000"
	W1025 17:46:01.835074   66547 addons.go:240] addon storage-provisioner should already be in state true
	I1025 17:46:01.835111   66547 host.go:66] Checking if "functional-188000" exists ...
	I1025 17:46:01.835140   66547 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 17:46:01.835365   66547 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:46:01.835397   66547 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
	I1025 17:46:01.835424   66547 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
	I1025 17:46:01.836001   66547 kapi.go:59] client config for functional-188000: &rest.Config{Host:"https://127.0.0.1:56239", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
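	The kapi.go line above dumps the rest.Config built for the profile: the host points at the forwarded apiserver port, and TLSClientConfig points at the profile's client certificate, key and cluster CA. A minimal sketch of constructing an equivalent config by hand follows; all paths and the port are illustrative assumptions.

```go
// profileclient: a minimal sketch of building a client the way the rest.Config
// dump above describes. All file paths and the port are illustrative.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://127.0.0.1:56239",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/.minikube/profiles/functional-188000/client.crt",
			KeyFile:  "/path/to/.minikube/profiles/functional-188000/client.key",
			CAFile:   "/path/to/.minikube/ca.crt",
		},
	}

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A trivial call to confirm the config works: list namespaces.
	nss, err := client.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("cluster reachable, %d namespaces\n", len(nss.Items))
}
```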
	I1025 17:46:01.839059   66547 round_trippers.go:463] GET https://127.0.0.1:56239/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 17:46:01.839364   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:01.839372   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:01.839377   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:01.842538   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:01.842551   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:01.842556   66547 round_trippers.go:580]     Content-Length: 291
	I1025 17:46:01.842567   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:01 GMT
	I1025 17:46:01.842572   66547 round_trippers.go:580]     Audit-Id: 4f69ffef-e8dd-40ea-b79d-626c9b31a1c9
	I1025 17:46:01.842576   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:01.842580   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:01.842584   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:01.842588   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:01.842608   66547 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f89c5d55-82d9-44ba-90e8-9c480cde91ad","resourceVersion":"378","creationTimestamp":"2023-10-26T00:45:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1025 17:46:01.842732   66547 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-188000" context rescaled to 1 replicas
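	The GET above reads the coredns Deployment's autoscaling/v1 Scale subresource, after which kapi.go reports the deployment rescaled to 1 replica. A minimal sketch of that read-then-update using client-go's GetScale/UpdateScale follows, with an assumed kubeconfig path.

```go
// rescalecoredns: a minimal sketch of the "rescaled to 1 replicas" step:
// read the Deployment's Scale subresource and update spec.replicas if needed.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deployments := client.AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale, as in the request above.
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		// Write the Scale subresource back with the desired replica count.
		if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns deployment scaled to 1 replica")
}
```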
	I1025 17:46:01.842754   66547 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 17:46:01.865936   66547 out.go:177] * Verifying Kubernetes components...
	I1025 17:46:01.909141   66547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 17:46:01.915352   66547 command_runner.go:130] > apiVersion: v1
	I1025 17:46:01.915369   66547 command_runner.go:130] > data:
	I1025 17:46:01.915374   66547 command_runner.go:130] >   Corefile: |
	I1025 17:46:01.915381   66547 command_runner.go:130] >     .:53 {
	I1025 17:46:01.915387   66547 command_runner.go:130] >         log
	I1025 17:46:01.915396   66547 command_runner.go:130] >         errors
	I1025 17:46:01.915409   66547 command_runner.go:130] >         health {
	I1025 17:46:01.915422   66547 command_runner.go:130] >            lameduck 5s
	I1025 17:46:01.915429   66547 command_runner.go:130] >         }
	I1025 17:46:01.915437   66547 command_runner.go:130] >         ready
	I1025 17:46:01.915446   66547 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1025 17:46:01.915452   66547 command_runner.go:130] >            pods insecure
	I1025 17:46:01.915462   66547 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1025 17:46:01.915470   66547 command_runner.go:130] >            ttl 30
	I1025 17:46:01.915475   66547 command_runner.go:130] >         }
	I1025 17:46:01.915481   66547 command_runner.go:130] >         prometheus :9153
	I1025 17:46:01.915486   66547 command_runner.go:130] >         hosts {
	I1025 17:46:01.915493   66547 command_runner.go:130] >            192.168.65.254 host.minikube.internal
	I1025 17:46:01.915498   66547 command_runner.go:130] >            fallthrough
	I1025 17:46:01.915503   66547 command_runner.go:130] >         }
	I1025 17:46:01.915509   66547 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1025 17:46:01.915517   66547 command_runner.go:130] >            max_concurrent 1000
	I1025 17:46:01.915526   66547 command_runner.go:130] >         }
	I1025 17:46:01.915535   66547 command_runner.go:130] >         cache 30
	I1025 17:46:01.915549   66547 command_runner.go:130] >         loop
	I1025 17:46:01.915570   66547 command_runner.go:130] >         reload
	I1025 17:46:01.915579   66547 command_runner.go:130] >         loadbalance
	I1025 17:46:01.915583   66547 command_runner.go:130] >     }
	I1025 17:46:01.915587   66547 command_runner.go:130] > kind: ConfigMap
	I1025 17:46:01.915590   66547 command_runner.go:130] > metadata:
	I1025 17:46:01.915594   66547 command_runner.go:130] >   creationTimestamp: "2023-10-26T00:45:19Z"
	I1025 17:46:01.915599   66547 command_runner.go:130] >   name: coredns
	I1025 17:46:01.915603   66547 command_runner.go:130] >   namespace: kube-system
	I1025 17:46:01.915607   66547 command_runner.go:130] >   resourceVersion: "345"
	I1025 17:46:01.915611   66547 command_runner.go:130] >   uid: 48ba2b39-a203-4bbd-acc5-ba1dc75f42a6
	I1025 17:46:01.915686   66547 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
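	The dump above is the coredns ConfigMap's Corefile, and start.go skips the update because the hosts block already contains host.minikube.internal. A minimal sketch of that check follows; the kubeconfig path and host IP are assumptions for illustration.

```go
// corednshosts: a minimal sketch of the "CoreDNS already contains
// host.minikube.internal" check: fetch the coredns ConfigMap and look for the
// host record in the Corefile.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	corefile := cm.Data["Corefile"]
	if strings.Contains(corefile, "host.minikube.internal") {
		fmt.Println("CoreDNS already contains host.minikube.internal host record, skipping...")
		return
	}
	// Otherwise a hosts block like the one in the dump above would be added,
	// e.g. hosts { 192.168.65.254 host.minikube.internal; fallthrough }.
	fmt.Println("host record missing; Corefile would need to be updated")
}
```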
	I1025 17:46:01.938009   66547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 17:46:01.917693   66547 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:46:01.923980   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-188000
	I1025 17:46:01.938222   66547 kapi.go:59] client config for functional-188000: &rest.Config{Host:"https://127.0.0.1:56239", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 17:46:01.959114   66547 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 17:46:01.959134   66547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 17:46:01.959234   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:46:01.960405   66547 addons.go:231] Setting addon default-storageclass=true in "functional-188000"
	W1025 17:46:01.960518   66547 addons.go:240] addon default-storageclass should already be in state true
	I1025 17:46:01.960578   66547 host.go:66] Checking if "functional-188000" exists ...
	I1025 17:46:01.963542   66547 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
	I1025 17:46:02.020144   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:46:02.020152   66547 node_ready.go:35] waiting up to 6m0s for node "functional-188000" to be "Ready" ...
	I1025 17:46:02.020260   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.020291   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.020298   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.020303   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.020606   66547 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 17:46:02.020617   66547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 17:46:02.020688   66547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
	I1025 17:46:02.024270   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.024294   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.024300   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.024305   66547 round_trippers.go:580]     Audit-Id: d5cd2377-1809-4d0c-9a6e-e21462f60e30
	I1025 17:46:02.024310   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.024315   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.024321   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.024327   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.024415   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:02.024893   66547 node_ready.go:49] node "functional-188000" has status "Ready":"True"
	I1025 17:46:02.024907   66547 node_ready.go:38] duration metric: took 4.722781ms waiting for node "functional-188000" to be "Ready" ...
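	node_ready.go above waits for the node's Ready condition before moving on. A minimal sketch of such a poll with client-go follows; kubeconfig path, node name and timings are assumptions.

```go
// nodeready: a minimal sketch of the "waiting for node to be Ready" step:
// poll the Node object and inspect its Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-188000", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Printf("node %q has status Ready=True\n", node.Name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```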
	I1025 17:46:02.024915   66547 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 17:46:02.024974   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:02.024979   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.024986   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.024991   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.028922   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.028944   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.028957   66547 round_trippers.go:580]     Audit-Id: 6870b210-07a0-4cdf-895d-dc2f65b016ec
	I1025 17:46:02.028999   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.029017   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.029036   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.029046   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.029057   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.029782   66547 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"399"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51058 chars]
	I1025 17:46:02.031320   66547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ff5ll" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:02.031393   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ff5ll
	I1025 17:46:02.031404   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.031414   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.031420   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.034955   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.034983   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.034995   66547 round_trippers.go:580]     Audit-Id: 8941c6e4-7e83-4117-92dc-ff11d13d7b99
	I1025 17:46:02.035007   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.035022   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.035034   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.035040   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.035045   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.035336   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6154 chars]
	I1025 17:46:02.035679   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.035687   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.035695   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.035700   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.038803   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.038816   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.038822   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.038827   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.038831   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.038836   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.038841   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.038847   66547 round_trippers.go:580]     Audit-Id: fa3499a0-907a-4f69-bf58-86d859239ead
	I1025 17:46:02.038908   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:02.039127   66547 pod_ready.go:92] pod "coredns-5dd5756b68-ff5ll" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:02.039136   66547 pod_ready.go:81] duration metric: took 7.80005ms waiting for pod "coredns-5dd5756b68-ff5ll" in "kube-system" namespace to be "Ready" ...
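	pod_ready.go above waits on each system-critical pod (selected by the component/k8s-app labels listed earlier) until its Ready condition is True, recording the duration for each. A minimal sketch of that wait loop follows; the kubeconfig path and timings are assumptions.

```go
// systempodsready: a minimal sketch of the "extra waiting for all
// system-critical pods" step: for each label selector, list matching pods in
// kube-system and wait until every one has a Ready condition of True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

var selectors = []string{
	"k8s-app=kube-dns",
	"component=etcd",
	"component=kube-apiserver",
	"component=kube-controller-manager",
	"k8s-app=kube-proxy",
	"component=kube-scheduler",
}

func podIsReady(pod corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func allReady(pods []corev1.Pod) bool {
	for _, p := range pods {
		if !podIsReady(p) {
			return false
		}
	}
	return true
}

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
				fmt.Printf("pods matching %q are Ready\n", sel)
				break
			}
			if time.Now().After(deadline) {
				fmt.Printf("timed out waiting for pods matching %q\n", sel)
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}
```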
	I1025 17:46:02.039144   66547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:02.039185   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:02.039190   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.039197   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.039202   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.042909   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.042923   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.042929   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.042941   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.042946   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.042952   66547 round_trippers.go:580]     Audit-Id: e612e10d-4453-4014-b3c7-1e0574e7662a
	I1025 17:46:02.042967   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.042973   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.043042   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:02.043364   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.043371   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.043378   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.043384   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.046303   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:02.046315   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.046321   66547 round_trippers.go:580]     Audit-Id: 3c8df985-1317-473c-97c0-a64ffede3a3f
	I1025 17:46:02.046328   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.046334   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.046339   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.046344   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.046349   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.046410   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:02.046661   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:02.046668   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.046675   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.046681   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.049321   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:02.049336   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.049341   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.049350   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.049355   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.049360   66547 round_trippers.go:580]     Audit-Id: f2334a50-e7dd-4955-96e5-7c1c426c0d9f
	I1025 17:46:02.049365   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.049370   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.049460   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:02.049785   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.049795   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.049802   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.049808   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.052679   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:02.052692   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.052701   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.052706   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.052712   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.052717   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.052722   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.052728   66547 round_trippers.go:580]     Audit-Id: d69bff72-c316-4f60-81c0-4d35e5ba8bbe
	I1025 17:46:02.052794   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:02.079377   66547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
	I1025 17:46:02.122395   66547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 17:46:02.183318   66547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 17:46:02.553345   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:02.553364   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.553374   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.553385   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.558558   66547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 17:46:02.558580   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.558588   66547 round_trippers.go:580]     Audit-Id: 949d03a1-ebf5-4f0c-a246-3dd81e422c32
	I1025 17:46:02.558593   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.558598   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.558609   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.558617   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.558622   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.558718   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:02.559048   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:02.559060   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:02.559071   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:02.559086   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:02.562217   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:02.562240   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:02.562246   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:02.562251   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:02.562255   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:02.562260   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:02 GMT
	I1025 17:46:02.562264   66547 round_trippers.go:580]     Audit-Id: 79bd8c4c-fd44-4540-9a4f-27fec52f6fdd
	I1025 17:46:02.562275   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:02.562364   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:03.039187   66547 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1025 17:46:03.041977   66547 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1025 17:46:03.045205   66547 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1025 17:46:03.048248   66547 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1025 17:46:03.050564   66547 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1025 17:46:03.053559   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:03.053568   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.053575   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.053581   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.056365   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.056390   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.056414   66547 round_trippers.go:580]     Audit-Id: f68121e6-0fd6-41c4-9cb9-e5b7c81b06d1
	I1025 17:46:03.056430   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.056436   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.056442   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.056448   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.056454   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.056534   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:03.056810   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:03.056817   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.056827   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.056839   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.057384   66547 command_runner.go:130] > pod/storage-provisioner configured
	I1025 17:46:03.059444   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.059454   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.059460   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.059464   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.059469   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.059476   66547 round_trippers.go:580]     Audit-Id: 313be3bd-5566-495d-b5fa-8963c57c9536
	I1025 17:46:03.059483   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.059491   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.059635   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:03.061389   66547 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1025 17:46:03.061456   66547 round_trippers.go:463] GET https://127.0.0.1:56239/apis/storage.k8s.io/v1/storageclasses
	I1025 17:46:03.061461   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.061467   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.061473   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.063978   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.063986   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.063992   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.063996   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.064001   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.064006   66547 round_trippers.go:580]     Content-Length: 1273
	I1025 17:46:03.064011   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.064016   66547 round_trippers.go:580]     Audit-Id: d00f9678-751d-484f-bad3-97f3f59a72d1
	I1025 17:46:03.064021   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.064040   66547 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"458"},"items":[{"metadata":{"name":"standard","uid":"6d60752b-781b-494a-b9bb-a1159bed062b","resourceVersion":"344","creationTimestamp":"2023-10-26T00:45:33Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T00:45:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1025 17:46:03.064343   66547 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6d60752b-781b-494a-b9bb-a1159bed062b","resourceVersion":"344","creationTimestamp":"2023-10-26T00:45:33Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T00:45:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 17:46:03.064369   66547 round_trippers.go:463] PUT https://127.0.0.1:56239/apis/storage.k8s.io/v1/storageclasses/standard
	I1025 17:46:03.064373   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.064380   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.064386   66547 round_trippers.go:473]     Content-Type: application/json
	I1025 17:46:03.064390   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.067390   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.067406   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.067412   66547 round_trippers.go:580]     Content-Length: 1220
	I1025 17:46:03.067418   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.067422   66547 round_trippers.go:580]     Audit-Id: 57315fd0-28a8-4710-8cf7-1645ee03e1a6
	I1025 17:46:03.067428   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.067433   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.067437   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.067442   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.067462   66547 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6d60752b-781b-494a-b9bb-a1159bed062b","resourceVersion":"344","creationTimestamp":"2023-10-26T00:45:33Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T00:45:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 17:46:03.113094   66547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 17:46:03.134830   66547 addons.go:502] enable addons completed in 1.299788878s: enabled=[storage-provisioner default-storageclass]
	I1025 17:46:03.553574   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:03.553590   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.553596   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.553601   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.556296   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.556307   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.556312   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.556317   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.556322   66547 round_trippers.go:580]     Audit-Id: 865eb1ed-d7d6-46f2-8407-56534b3b7398
	I1025 17:46:03.556326   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.556331   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.556336   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.556418   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:03.556664   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:03.556670   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:03.556675   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:03.556680   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:03.559012   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:03.559022   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:03.559027   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:03.559037   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:03 GMT
	I1025 17:46:03.559043   66547 round_trippers.go:580]     Audit-Id: 49b5759c-6b3f-4239-b484-305075b508db
	I1025 17:46:03.559047   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:03.559052   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:03.559057   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:03.559107   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:04.055351   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:04.055372   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:04.055384   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:04.055393   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:04.059135   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:04.059146   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:04.059151   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:04.059156   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:04.059161   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:04.059166   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:04 GMT
	I1025 17:46:04.059170   66547 round_trippers.go:580]     Audit-Id: e6f7d332-9c72-4d6c-87ad-ec6327c1f9ff
	I1025 17:46:04.059175   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:04.059276   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:04.059539   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:04.059548   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:04.059554   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:04.059560   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:04.061899   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:04.061909   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:04.061914   66547 round_trippers.go:580]     Audit-Id: 970abfcd-d870-4d67-b2de-7f4be1b88964
	I1025 17:46:04.061925   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:04.061930   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:04.061935   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:04.061940   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:04.061944   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:04 GMT
	I1025 17:46:04.062121   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:04.062297   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:04.553311   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:04.553329   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:04.553338   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:04.553345   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:04.557062   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:04.557075   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:04.557081   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:04.557086   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:04.557115   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:04 GMT
	I1025 17:46:04.557126   66547 round_trippers.go:580]     Audit-Id: f4703e77-a3c3-4051-850e-1da515e3b30f
	I1025 17:46:04.557134   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:04.557140   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:04.557336   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:04.557644   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:04.557659   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:04.557670   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:04.557679   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:04.560596   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:04.560609   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:04.560615   66547 round_trippers.go:580]     Audit-Id: 10cf3519-abd2-4c64-a7e5-e86a4e1830aa
	I1025 17:46:04.560620   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:04.560625   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:04.560629   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:04.560634   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:04.560639   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:04 GMT
	I1025 17:46:04.560697   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:05.053293   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:05.053318   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:05.053340   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:05.053351   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:05.057813   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:05.057825   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:05.057831   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:05.057841   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:05.057846   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:05.057851   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:05.057856   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:05 GMT
	I1025 17:46:05.057864   66547 round_trippers.go:580]     Audit-Id: 6b7747e2-11d7-4e3d-a701-db49b78b9b6b
	I1025 17:46:05.057948   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:05.058212   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:05.058222   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:05.058228   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:05.058233   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:05.060850   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:05.060859   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:05.060871   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:05 GMT
	I1025 17:46:05.060877   66547 round_trippers.go:580]     Audit-Id: bc40d7ca-3032-4389-999b-37cbb81a09cc
	I1025 17:46:05.060881   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:05.060886   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:05.060891   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:05.060895   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:05.060946   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:05.553727   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:05.553747   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:05.553765   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:05.553775   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:05.557499   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:05.557517   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:05.557523   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:05.557528   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:05.557533   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:05.557537   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:05.557542   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:05 GMT
	I1025 17:46:05.557552   66547 round_trippers.go:580]     Audit-Id: 5708742a-e204-41ef-a503-668201cc4ef7
	I1025 17:46:05.557636   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:05.557889   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:05.557897   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:05.557903   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:05.557907   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:05.560643   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:05.560653   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:05.560659   66547 round_trippers.go:580]     Audit-Id: 174c83a1-fe18-466e-9ab9-e9693657189c
	I1025 17:46:05.560664   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:05.560668   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:05.560674   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:05.560680   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:05.560686   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:05 GMT
	I1025 17:46:05.560732   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:06.054772   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:06.054794   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:06.054806   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:06.054816   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:06.059228   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:06.059241   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:06.059246   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:06.059251   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:06 GMT
	I1025 17:46:06.059256   66547 round_trippers.go:580]     Audit-Id: e34061de-7f6b-4947-8973-e6ee6078f6aa
	I1025 17:46:06.059267   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:06.059273   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:06.059277   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:06.059347   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:06.059591   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:06.059599   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:06.059604   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:06.059609   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:06.061895   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:06.061906   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:06.061912   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:06 GMT
	I1025 17:46:06.061917   66547 round_trippers.go:580]     Audit-Id: 35218bf7-13b4-4920-9749-41fdcd46c00d
	I1025 17:46:06.061922   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:06.061926   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:06.061930   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:06.061938   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:06.061991   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:06.553589   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:06.553614   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:06.553626   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:06.553635   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:06.557639   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:06.557657   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:06.557669   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:06.557682   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:06 GMT
	I1025 17:46:06.557689   66547 round_trippers.go:580]     Audit-Id: ab3faf48-bfaf-4343-a8f5-23f9c1f06ea3
	I1025 17:46:06.557695   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:06.557700   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:06.557704   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:06.557784   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:06.558060   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:06.558070   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:06.558079   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:06.558104   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:06.561227   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:06.561239   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:06.561253   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:06.561262   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:06.561266   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:06.561272   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:06.561277   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:06 GMT
	I1025 17:46:06.561283   66547 round_trippers.go:580]     Audit-Id: fe3f9726-95d4-4d20-91c2-d022bdf6c86b
	I1025 17:46:06.561345   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:06.561547   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:07.053649   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:07.053671   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:07.053683   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:07.053693   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:07.058305   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:07.058317   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:07.058323   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:07.058329   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:07.058333   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:07.058339   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:07 GMT
	I1025 17:46:07.058349   66547 round_trippers.go:580]     Audit-Id: 00a571b5-5f08-42f3-9c43-8399a1c77c52
	I1025 17:46:07.058354   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:07.058443   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:07.058711   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:07.058717   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:07.058725   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:07.058731   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:07.061317   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:07.061327   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:07.061334   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:07.061342   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:07.061349   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:07.061354   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:07 GMT
	I1025 17:46:07.061358   66547 round_trippers.go:580]     Audit-Id: e79130af-3e90-410d-83fa-b541da60e340
	I1025 17:46:07.061363   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:07.061422   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:07.554161   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:07.554183   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:07.554194   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:07.554204   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:07.558461   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:07.558480   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:07.558488   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:07.558494   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:07 GMT
	I1025 17:46:07.558501   66547 round_trippers.go:580]     Audit-Id: d53e21b6-ed1d-43b0-81ca-9e3beed52379
	I1025 17:46:07.558507   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:07.558513   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:07.558519   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:07.558614   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:07.558958   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:07.558981   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:07.558987   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:07.558992   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:07.561252   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:07.561262   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:07.561267   66547 round_trippers.go:580]     Audit-Id: 1c958ae0-3e23-48ae-a573-40b9d5022235
	I1025 17:46:07.561271   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:07.561277   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:07.561281   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:07.561289   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:07.561293   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:07 GMT
	I1025 17:46:07.561344   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:08.054101   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:08.054124   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:08.054136   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:08.054146   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:08.058386   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:08.058398   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:08.058404   66547 round_trippers.go:580]     Audit-Id: 887ddb4c-6b63-4cf1-bbcc-ced84996b1f1
	I1025 17:46:08.058408   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:08.058413   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:08.058419   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:08.058423   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:08.058427   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:08 GMT
	I1025 17:46:08.058562   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:08.058831   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:08.058839   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:08.058853   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:08.058859   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:08.061229   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:08.061240   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:08.061246   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:08.061256   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:08.061262   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:08 GMT
	I1025 17:46:08.061267   66547 round_trippers.go:580]     Audit-Id: 8f1c8539-2e1b-4b8a-9422-80a903b3915b
	I1025 17:46:08.061272   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:08.061276   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:08.061332   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:08.553462   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:08.553484   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:08.553496   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:08.553506   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:08.557800   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:08.557812   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:08.557818   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:08.557822   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:08.557826   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:08 GMT
	I1025 17:46:08.557831   66547 round_trippers.go:580]     Audit-Id: 2ea19256-c604-487c-b853-9efdfeb7e08c
	I1025 17:46:08.557836   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:08.557841   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:08.557922   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:08.558180   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:08.558186   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:08.558192   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:08.558196   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:08.560778   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:08.560788   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:08.560793   66547 round_trippers.go:580]     Audit-Id: cb66681b-0282-480b-8414-d815083c64de
	I1025 17:46:08.560798   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:08.560805   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:08.560810   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:08.560815   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:08.560826   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:08 GMT
	I1025 17:46:08.560972   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:09.055448   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:09.055470   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:09.055482   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:09.055492   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:09.060067   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:09.060081   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:09.060089   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:09.060094   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:09.060107   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:09 GMT
	I1025 17:46:09.060113   66547 round_trippers.go:580]     Audit-Id: c3239eea-8614-4f5c-9fd9-aa6c2c1c4bf7
	I1025 17:46:09.060117   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:09.060122   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:09.060233   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:09.060488   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:09.060495   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:09.060502   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:09.060509   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:09.062675   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:09.062687   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:09.062695   66547 round_trippers.go:580]     Audit-Id: 38bbad82-d6d0-42fe-9dab-bdcbd7960a0c
	I1025 17:46:09.062702   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:09.062709   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:09.062714   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:09.062718   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:09.062723   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:09 GMT
	I1025 17:46:09.062839   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:09.063014   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:09.553472   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:09.553489   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:09.553497   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:09.553505   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:09.557011   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:09.557022   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:09.557028   66547 round_trippers.go:580]     Audit-Id: 659aab0a-50a9-49f9-8180-a6663a129ec7
	I1025 17:46:09.557036   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:09.557042   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:09.557047   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:09.557051   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:09.557056   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:09 GMT
	I1025 17:46:09.557142   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:09.557393   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:09.557400   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:09.557405   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:09.557411   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:09.559760   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:09.559770   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:09.559775   66547 round_trippers.go:580]     Audit-Id: 7b30c08e-98b0-4ee4-a631-e83b1e716d93
	I1025 17:46:09.559780   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:09.559785   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:09.559793   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:09.559799   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:09.559803   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:09 GMT
	I1025 17:46:09.559852   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:10.054170   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:10.054187   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:10.054196   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:10.054203   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:10.057437   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:10.057448   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:10.057454   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:10 GMT
	I1025 17:46:10.057459   66547 round_trippers.go:580]     Audit-Id: 280b56f7-61a4-4ad8-be90-a83e28ef83df
	I1025 17:46:10.057463   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:10.057469   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:10.057473   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:10.057478   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:10.057556   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:10.057814   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:10.057821   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:10.057828   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:10.057835   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:10.060112   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:10.060121   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:10.060126   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:10.060139   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:10.060145   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:10.060149   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:10 GMT
	I1025 17:46:10.060154   66547 round_trippers.go:580]     Audit-Id: a696ff4d-c2f9-4026-9955-092b57e65c55
	I1025 17:46:10.060159   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:10.060211   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:10.554186   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:10.554208   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:10.554222   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:10.554232   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:10.557308   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:10.557325   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:10.557331   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:10.557335   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:10.557340   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:10.557345   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:10 GMT
	I1025 17:46:10.557350   66547 round_trippers.go:580]     Audit-Id: 7f7d0ae3-d12a-4554-a1b4-1d882384e1b8
	I1025 17:46:10.557355   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:10.557442   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:10.557719   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:10.557725   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:10.557731   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:10.557736   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:10.560336   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:10.560347   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:10.560352   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:10.560357   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:10.560361   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:10.560366   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:10.560371   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:10 GMT
	I1025 17:46:10.560375   66547 round_trippers.go:580]     Audit-Id: 8094dea4-4cda-4630-94c7-50b17b72ab31
	I1025 17:46:10.560430   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:11.055174   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:11.055195   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:11.055207   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:11.055216   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:11.059613   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:11.059626   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:11.059631   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:11 GMT
	I1025 17:46:11.059635   66547 round_trippers.go:580]     Audit-Id: e5eef706-ceab-4449-88b2-5cebb50464e7
	I1025 17:46:11.059640   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:11.059645   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:11.059650   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:11.059654   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:11.059772   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:11.060020   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:11.060026   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:11.060032   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:11.060037   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:11.062410   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:11.062420   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:11.062426   66547 round_trippers.go:580]     Audit-Id: 2adee1dd-0f79-47c6-835f-d66df593e0d7
	I1025 17:46:11.062431   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:11.062436   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:11.062441   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:11.062446   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:11.062450   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:11 GMT
	I1025 17:46:11.062508   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:11.555434   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:11.555453   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:11.555465   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:11.555492   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:11.558771   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:11.558782   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:11.558788   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:11.558797   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:11.558802   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:11.558811   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:11 GMT
	I1025 17:46:11.558816   66547 round_trippers.go:580]     Audit-Id: e2b3f75d-6c3c-4acf-a15a-bf92f30adea2
	I1025 17:46:11.558820   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:11.558918   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:11.559179   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:11.559185   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:11.559191   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:11.559197   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:11.561672   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:11.561682   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:11.561687   66547 round_trippers.go:580]     Audit-Id: df3a6688-65b5-43f6-86e3-cd7c20e21f53
	I1025 17:46:11.561693   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:11.561698   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:11.561702   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:11.561707   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:11.561712   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:11 GMT
	I1025 17:46:11.561781   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:11.561975   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:12.055441   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:12.055461   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:12.055472   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:12.055481   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:12.059828   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:12.059843   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:12.059850   66547 round_trippers.go:580]     Audit-Id: e8a2e750-7848-43c8-b283-bc7345156427
	I1025 17:46:12.059857   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:12.059864   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:12.059871   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:12.059877   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:12.059884   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:12 GMT
	I1025 17:46:12.059982   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:12.060267   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:12.060274   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:12.060279   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:12.060284   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:12.062617   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:12.062627   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:12.062632   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:12.062637   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:12.062645   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:12.062653   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:12.062658   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:12 GMT
	I1025 17:46:12.062663   66547 round_trippers.go:580]     Audit-Id: 9f9018fb-f9da-4ffe-ae03-831547ab52c3
	I1025 17:46:12.062714   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:12.553490   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:12.553502   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:12.553509   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:12.553514   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:12.556087   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:12.556100   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:12.556106   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:12 GMT
	I1025 17:46:12.556116   66547 round_trippers.go:580]     Audit-Id: 73e43628-53b2-4953-a87b-bbe171b78108
	I1025 17:46:12.556121   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:12.556126   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:12.556131   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:12.556136   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:12.556224   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:12.556500   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:12.556513   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:12.556527   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:12.556537   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:12.559214   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:12.559224   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:12.559230   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:12 GMT
	I1025 17:46:12.559235   66547 round_trippers.go:580]     Audit-Id: f8768872-c339-45b8-837c-95ad5ee29477
	I1025 17:46:12.559239   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:12.559245   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:12.559249   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:12.559254   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:12.559316   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:13.054989   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:13.055006   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:13.055014   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:13.055021   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:13.058178   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:13.058189   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:13.058195   66547 round_trippers.go:580]     Audit-Id: 8b316948-7112-442c-a6ce-289a0ee21e6e
	I1025 17:46:13.058199   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:13.058203   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:13.058207   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:13.058212   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:13.058216   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:13 GMT
	I1025 17:46:13.058353   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:13.058613   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:13.058621   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:13.058627   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:13.058632   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:13.061234   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:13.061247   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:13.061254   66547 round_trippers.go:580]     Audit-Id: b970d2a4-573a-4ded-902d-1609df520a57
	I1025 17:46:13.061258   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:13.061263   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:13.061268   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:13.061273   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:13.061277   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:13 GMT
	I1025 17:46:13.061326   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:13.553861   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:13.553873   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:13.553880   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:13.553885   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:13.556689   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:13.556704   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:13.556710   66547 round_trippers.go:580]     Audit-Id: 7aeda20d-0237-4068-ba38-71d551d5a3be
	I1025 17:46:13.556716   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:13.556722   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:13.556726   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:13.556731   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:13.556736   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:13 GMT
	I1025 17:46:13.556823   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:13.557074   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:13.557081   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:13.557086   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:13.557091   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:13.559445   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:13.559454   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:13.559459   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:13.559463   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:13.559468   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:13.559476   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:13 GMT
	I1025 17:46:13.559482   66547 round_trippers.go:580]     Audit-Id: 843ca625-3c1e-40e6-871b-62fe676745bc
	I1025 17:46:13.559486   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:13.559536   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:14.053913   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:14.053936   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:14.053949   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:14.053959   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:14.058222   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:14.058236   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:14.058241   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:14.058246   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:14.058253   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:14.058260   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:14 GMT
	I1025 17:46:14.058265   66547 round_trippers.go:580]     Audit-Id: 865d1ff4-d333-4d45-87d2-ebc2dd54b3a0
	I1025 17:46:14.058275   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:14.058344   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:14.058585   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:14.058596   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:14.058602   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:14.058607   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:14.061173   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:14.061183   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:14.061190   66547 round_trippers.go:580]     Audit-Id: 9fd6e6d7-3d8a-4a15-be25-57b70fde0432
	I1025 17:46:14.061194   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:14.061199   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:14.061204   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:14.061215   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:14.061221   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:14 GMT
	I1025 17:46:14.061278   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:14.061470   66547 pod_ready.go:102] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"False"
	I1025 17:46:14.554852   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:14.554872   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:14.554883   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:14.554892   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:14.559391   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:14.559401   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:14.559412   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:14.559418   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:14.559423   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:14.559428   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:14 GMT
	I1025 17:46:14.559439   66547 round_trippers.go:580]     Audit-Id: ea0b7054-23d0-4329-87ba-1943779cd292
	I1025 17:46:14.559444   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:14.559523   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"397","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6290 chars]
	I1025 17:46:14.559773   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:14.559779   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:14.559785   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:14.559790   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:14.562293   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:14.562303   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:14.562308   66547 round_trippers.go:580]     Audit-Id: f78bdd81-304f-4dcd-a6d8-f34685ea99ee
	I1025 17:46:14.562313   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:14.562318   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:14.562323   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:14.562328   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:14.562334   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:14 GMT
	I1025 17:46:14.562383   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.053819   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/etcd-functional-188000
	I1025 17:46:15.053839   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.053851   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.053861   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.058579   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:15.058589   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.058595   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.058599   66547 round_trippers.go:580]     Audit-Id: 74abae54-baab-406a-94ee-1aa8a7b116bc
	I1025 17:46:15.058604   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.058609   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.058613   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.058618   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.058688   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-188000","namespace":"kube-system","uid":"095a6b2c-e973-4dad-9409-01e79c7e3021","resourceVersion":"469","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.mirror":"884ed00cd2aaa3b4f518197dc5a844ef","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6066 chars]
	I1025 17:46:15.058951   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.058957   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.058963   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.058968   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.061366   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.061379   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.061388   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.061397   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.061403   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.061408   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.061412   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.061418   66547 round_trippers.go:580]     Audit-Id: bd320442-f000-417a-9183-1a9852c5d3d6
	I1025 17:46:15.061516   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.061707   66547 pod_ready.go:92] pod "etcd-functional-188000" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:15.061714   66547 pod_ready.go:81] duration metric: took 13.022175482s waiting for pod "etcd-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.061724   66547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.061755   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-188000
	I1025 17:46:15.061760   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.061765   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.061771   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.063940   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.063949   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.063954   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.063959   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.063969   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.063974   66547 round_trippers.go:580]     Audit-Id: 850cde05-a797-40a3-80c3-1eae0db57c8d
	I1025 17:46:15.063979   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.063984   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.064056   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-188000","namespace":"kube-system","uid":"6811c037-9ba7-49b2-9dc8-e7c835a205ee","resourceVersion":"460","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"0f3f9f77e1fc8a12cf1621823498272c","kubernetes.io/config.mirror":"0f3f9f77e1fc8a12cf1621823498272c","kubernetes.io/config.seen":"2023-10-26T00:45:19.375266605Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8448 chars]
	I1025 17:46:15.064322   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.064329   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.064334   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.064340   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.066647   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.066656   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.066686   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.066692   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.066697   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.066701   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.066706   66547 round_trippers.go:580]     Audit-Id: 9ef039f5-0771-4f52-8e52-5a9f73ca43ba
	I1025 17:46:15.066711   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.066774   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.066939   66547 pod_ready.go:92] pod "kube-apiserver-functional-188000" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:15.066945   66547 pod_ready.go:81] duration metric: took 5.216443ms waiting for pod "kube-apiserver-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.066951   66547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.066983   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-188000
	I1025 17:46:15.066988   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.066993   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.066998   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.069468   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.069477   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.069482   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.069487   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.069492   66547 round_trippers.go:580]     Audit-Id: cb573095-3c2f-44e0-bf68-53836dcd873d
	I1025 17:46:15.069496   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.069501   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.069506   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.069579   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-188000","namespace":"kube-system","uid":"000afba9-c176-4b7f-9674-24c20b7b1e92","resourceVersion":"465","creationTimestamp":"2023-10-26T00:45:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1a5cba45956bd26c7fcaab9a2058286e","kubernetes.io/config.mirror":"1a5cba45956bd26c7fcaab9a2058286e","kubernetes.io/config.seen":"2023-10-26T00:45:13.501886918Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8021 chars]
	I1025 17:46:15.069836   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.069843   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.069852   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.069858   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.072107   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.072117   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.072123   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.072128   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.072133   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.072138   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.072142   66547 round_trippers.go:580]     Audit-Id: 119d9906-4821-48a4-ae88-535d42729f96
	I1025 17:46:15.072147   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.072212   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.072414   66547 pod_ready.go:92] pod "kube-controller-manager-functional-188000" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:15.072428   66547 pod_ready.go:81] duration metric: took 5.469911ms waiting for pod "kube-controller-manager-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.072443   66547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bnvpn" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.072484   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-proxy-bnvpn
	I1025 17:46:15.072490   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.072496   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.072500   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.074917   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.074926   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.074932   66547 round_trippers.go:580]     Audit-Id: decd6794-8dbf-4ba6-9cc7-185c7f37c6e6
	I1025 17:46:15.074937   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.074943   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.074948   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.074953   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.074957   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.075015   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bnvpn","generateName":"kube-proxy-","namespace":"kube-system","uid":"35c2ae14-426f-4a44-b88e-d3d88befe16f","resourceVersion":"389","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"85b94970-c74c-4b8b-b6dd-957621f9c519","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85b94970-c74c-4b8b-b6dd-957621f9c519\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I1025 17:46:15.075250   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.075257   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.075262   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.075270   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.077786   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.077795   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.077801   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.077805   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.077811   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.077816   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.077821   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.077826   66547 round_trippers.go:580]     Audit-Id: db80ddc9-6928-4174-9761-32404555f696
	I1025 17:46:15.077887   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.078057   66547 pod_ready.go:92] pod "kube-proxy-bnvpn" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:15.078063   66547 pod_ready.go:81] duration metric: took 5.613315ms waiting for pod "kube-proxy-bnvpn" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.078069   66547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:15.078101   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:15.078105   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.078111   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.078116   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.080390   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.080399   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.080408   66547 round_trippers.go:580]     Audit-Id: 2b4ac5f4-3133-4ae4-b836-0ce5fdc80192
	I1025 17:46:15.080414   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.080431   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.080445   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.080450   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.080455   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.080507   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-188000","namespace":"kube-system","uid":"ac7541cf-a304-4933-acea-37c4f53f6710","resourceVersion":"398","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.mirror":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5147 chars]
	I1025 17:46:15.080758   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.080765   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.080773   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.080779   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.083185   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.083194   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.083199   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.083204   66547 round_trippers.go:580]     Audit-Id: 152f905b-e7d4-409d-beab-daf3a108a2b2
	I1025 17:46:15.083209   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.083218   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.083223   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.083228   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.083284   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.254423   66547 request.go:629] Waited for 170.893265ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:15.254487   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:15.254497   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.254508   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.254519   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.259058   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:15.259074   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.259080   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.259084   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.259089   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.259093   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.259098   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.259103   66547 round_trippers.go:580]     Audit-Id: 4da5ad3d-18de-4358-a529-387ddf94b3f3
	I1025 17:46:15.259178   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-188000","namespace":"kube-system","uid":"ac7541cf-a304-4933-acea-37c4f53f6710","resourceVersion":"398","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.mirror":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5147 chars]
	I1025 17:46:15.453865   66547 request.go:629] Waited for 194.435263ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.453956   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.453969   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.453979   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.453987   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.457506   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:15.457520   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.457532   66547 round_trippers.go:580]     Audit-Id: 2a313e14-6655-461e-b539-60e84dd16088
	I1025 17:46:15.457537   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.457542   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.457546   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.457551   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.457556   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.457615   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:15.960076   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:15.960098   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.960109   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.960119   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.964346   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:15.964355   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.964361   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.964365   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.964370   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.964375   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.964379   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.964388   66547 round_trippers.go:580]     Audit-Id: 02705c23-9c69-4515-b19d-610797ec5736
	I1025 17:46:15.964467   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-188000","namespace":"kube-system","uid":"ac7541cf-a304-4933-acea-37c4f53f6710","resourceVersion":"398","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.mirror":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5147 chars]
	I1025 17:46:15.964698   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:15.964705   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:15.964710   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:15.964717   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:15.967202   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:15.967212   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:15.967217   66547 round_trippers.go:580]     Audit-Id: 511daa23-5a0a-4beb-9b76-08ade497efb7
	I1025 17:46:15.967223   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:15.967228   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:15.967232   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:15.967238   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:15.967243   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:15 GMT
	I1025 17:46:15.967452   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:16.458028   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000
	I1025 17:46:16.458040   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.458047   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.458054   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.460857   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:16.460869   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.460875   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.460880   66547 round_trippers.go:580]     Audit-Id: 7f3abe07-377a-4f8a-9316-2ef068c87158
	I1025 17:46:16.460884   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.460889   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.460894   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.460898   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.460954   66547 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-188000","namespace":"kube-system","uid":"ac7541cf-a304-4933-acea-37c4f53f6710","resourceVersion":"474","creationTimestamp":"2023-10-26T00:45:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.mirror":"5b69b95f77dea85816490ff8f86d59b3","kubernetes.io/config.seen":"2023-10-26T00:45:19.375270310Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4903 chars]
	I1025 17:46:16.461192   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes/functional-188000
	I1025 17:46:16.461200   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.461208   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.461214   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.463625   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:16.463635   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.463640   66547 round_trippers.go:580]     Audit-Id: 2617ab88-28c3-4631-a4b7-6e8f25540de6
	I1025 17:46:16.463645   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.463651   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.463655   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.463664   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.463674   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.463731   66547 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2023-10-26T00:45:16Z","fieldsType":"FieldsV1", [truncated 4791 chars]
	I1025 17:46:16.463906   66547 pod_ready.go:92] pod "kube-scheduler-functional-188000" in "kube-system" namespace has status "Ready":"True"
	I1025 17:46:16.463915   66547 pod_ready.go:81] duration metric: took 1.385799554s waiting for pod "kube-scheduler-functional-188000" in "kube-system" namespace to be "Ready" ...
	I1025 17:46:16.463926   66547 pod_ready.go:38] duration metric: took 14.438569915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 17:46:16.463940   66547 api_server.go:52] waiting for apiserver process to appear ...
	I1025 17:46:16.463993   66547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 17:46:16.475108   66547 command_runner.go:130] > 5688
	I1025 17:46:16.475797   66547 api_server.go:72] duration metric: took 14.632584237s to wait for apiserver process to appear ...
	I1025 17:46:16.475805   66547 api_server.go:88] waiting for apiserver healthz status ...
	I1025 17:46:16.475815   66547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56239/healthz ...
	I1025 17:46:16.481676   66547 api_server.go:279] https://127.0.0.1:56239/healthz returned 200:
	ok
	I1025 17:46:16.481717   66547 round_trippers.go:463] GET https://127.0.0.1:56239/version
	I1025 17:46:16.481723   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.481729   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.481736   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.483200   66547 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 17:46:16.483209   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.483215   66547 round_trippers.go:580]     Content-Length: 264
	I1025 17:46:16.483248   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.483254   66547 round_trippers.go:580]     Audit-Id: 851d77ee-df52-4077-8ba6-cfdbb13cf5d2
	I1025 17:46:16.483265   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.483270   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.483274   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.483280   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.483296   66547 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 17:46:16.483327   66547 api_server.go:141] control plane version: v1.28.3
	I1025 17:46:16.483334   66547 api_server.go:131] duration metric: took 7.524769ms to wait for apiserver health ...
	I1025 17:46:16.483339   66547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 17:46:16.483372   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:16.483376   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.483381   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.483389   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.486264   66547 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 17:46:16.486275   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.486281   66547 round_trippers.go:580]     Audit-Id: cb0a122d-d77e-4443-a0f7-e7365750745f
	I1025 17:46:16.486285   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.486289   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.486295   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.486302   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.486310   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.487576   66547 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49690 chars]
	I1025 17:46:16.488734   66547 system_pods.go:59] 7 kube-system pods found
	I1025 17:46:16.488744   66547 system_pods.go:61] "coredns-5dd5756b68-ff5ll" [7022509e-429b-40a1-95e2-ac3b980b2b1e] Running
	I1025 17:46:16.488748   66547 system_pods.go:61] "etcd-functional-188000" [095a6b2c-e973-4dad-9409-01e79c7e3021] Running
	I1025 17:46:16.488752   66547 system_pods.go:61] "kube-apiserver-functional-188000" [6811c037-9ba7-49b2-9dc8-e7c835a205ee] Running
	I1025 17:46:16.488756   66547 system_pods.go:61] "kube-controller-manager-functional-188000" [000afba9-c176-4b7f-9674-24c20b7b1e92] Running
	I1025 17:46:16.488764   66547 system_pods.go:61] "kube-proxy-bnvpn" [35c2ae14-426f-4a44-b88e-d3d88befe16f] Running
	I1025 17:46:16.488769   66547 system_pods.go:61] "kube-scheduler-functional-188000" [ac7541cf-a304-4933-acea-37c4f53f6710] Running
	I1025 17:46:16.488773   66547 system_pods.go:61] "storage-provisioner" [6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f] Running
	I1025 17:46:16.488776   66547 system_pods.go:74] duration metric: took 5.432289ms to wait for pod list to return data ...
	I1025 17:46:16.488782   66547 default_sa.go:34] waiting for default service account to be created ...
	I1025 17:46:16.655930   66547 request.go:629] Waited for 167.068857ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/namespaces/default/serviceaccounts
	I1025 17:46:16.656009   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/default/serviceaccounts
	I1025 17:46:16.656016   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.656024   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.656032   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.659126   66547 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 17:46:16.659137   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.659143   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.659148   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.659154   66547 round_trippers.go:580]     Content-Length: 261
	I1025 17:46:16.659158   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.659162   66547 round_trippers.go:580]     Audit-Id: c0126d31-0283-48fd-981d-f5b6d435fd2a
	I1025 17:46:16.659167   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.659174   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.659186   66547 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d159d953-5483-49f6-8b51-7f76441cc765","resourceVersion":"289","creationTimestamp":"2023-10-26T00:45:31Z"}}]}
	I1025 17:46:16.659314   66547 default_sa.go:45] found service account: "default"
	I1025 17:46:16.659322   66547 default_sa.go:55] duration metric: took 170.531208ms for default service account to be created ...
	I1025 17:46:16.659329   66547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 17:46:16.854498   66547 request.go:629] Waited for 195.005453ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:16.854571   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/namespaces/kube-system/pods
	I1025 17:46:16.854582   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:16.854593   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:16.854605   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:16.859796   66547 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 17:46:16.859809   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:16.859815   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:16.859819   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:16.859824   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:16.859828   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:16.859833   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:16 GMT
	I1025 17:46:16.859842   66547 round_trippers.go:580]     Audit-Id: 5d5f778f-c41f-4ae0-a9c4-f39a601188a6
	I1025 17:46:16.860185   66547 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ff5ll","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"7022509e-429b-40a1-95e2-ac3b980b2b1e","resourceVersion":"395","creationTimestamp":"2023-10-26T00:45:32Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ef2c2cc4-097f-444f-b52c-dfc3304565b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T00:45:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2c2cc4-097f-444f-b52c-dfc3304565b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49690 chars]
	I1025 17:46:16.861321   66547 system_pods.go:86] 7 kube-system pods found
	I1025 17:46:16.861330   66547 system_pods.go:89] "coredns-5dd5756b68-ff5ll" [7022509e-429b-40a1-95e2-ac3b980b2b1e] Running
	I1025 17:46:16.861334   66547 system_pods.go:89] "etcd-functional-188000" [095a6b2c-e973-4dad-9409-01e79c7e3021] Running
	I1025 17:46:16.861338   66547 system_pods.go:89] "kube-apiserver-functional-188000" [6811c037-9ba7-49b2-9dc8-e7c835a205ee] Running
	I1025 17:46:16.861342   66547 system_pods.go:89] "kube-controller-manager-functional-188000" [000afba9-c176-4b7f-9674-24c20b7b1e92] Running
	I1025 17:46:16.861345   66547 system_pods.go:89] "kube-proxy-bnvpn" [35c2ae14-426f-4a44-b88e-d3d88befe16f] Running
	I1025 17:46:16.861350   66547 system_pods.go:89] "kube-scheduler-functional-188000" [ac7541cf-a304-4933-acea-37c4f53f6710] Running
	I1025 17:46:16.861353   66547 system_pods.go:89] "storage-provisioner" [6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f] Running
	I1025 17:46:16.861358   66547 system_pods.go:126] duration metric: took 202.018505ms to wait for k8s-apps to be running ...
	I1025 17:46:16.861363   66547 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 17:46:16.861414   66547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 17:46:16.873132   66547 system_svc.go:56] duration metric: took 11.763743ms WaitForService to wait for kubelet.
	I1025 17:46:16.873146   66547 kubeadm.go:581] duration metric: took 15.02992274s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 17:46:16.873158   66547 node_conditions.go:102] verifying NodePressure condition ...
	I1025 17:46:17.054055   66547 request.go:629] Waited for 180.8478ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56239/api/v1/nodes
	I1025 17:46:17.054105   66547 round_trippers.go:463] GET https://127.0.0.1:56239/api/v1/nodes
	I1025 17:46:17.054139   66547 round_trippers.go:469] Request Headers:
	I1025 17:46:17.054273   66547 round_trippers.go:473]     Accept: application/json, */*
	I1025 17:46:17.054292   66547 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 17:46:17.058336   66547 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 17:46:17.058347   66547 round_trippers.go:577] Response Headers:
	I1025 17:46:17.058353   66547 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b027f849-de75-4dac-a593-5b1469f286b0
	I1025 17:46:17.058357   66547 round_trippers.go:580]     Date: Thu, 26 Oct 2023 00:46:17 GMT
	I1025 17:46:17.058362   66547 round_trippers.go:580]     Audit-Id: a5713e87-40d7-43b6-9c61-1bb63a4a2784
	I1025 17:46:17.058374   66547 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 17:46:17.058379   66547 round_trippers.go:580]     Content-Type: application/json
	I1025 17:46:17.058384   66547 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76e6af54-9d18-4124-b2cb-558a6d0bbf54
	I1025 17:46:17.058441   66547 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"functional-188000","uid":"4c294de5-9313-4d5b-b9d8-4844db374a8b","resourceVersion":"384","creationTimestamp":"2023-10-26T00:45:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-188000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"functional-188000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T17_45_19_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4844 chars]
	I1025 17:46:17.058663   66547 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 17:46:17.058675   66547 node_conditions.go:123] node cpu capacity is 12
	I1025 17:46:17.058685   66547 node_conditions.go:105] duration metric: took 185.517472ms to run NodePressure ...
	I1025 17:46:17.058692   66547 start.go:228] waiting for startup goroutines ...
	I1025 17:46:17.058697   66547 start.go:233] waiting for cluster config update ...
	I1025 17:46:17.058708   66547 start.go:242] writing updated cluster config ...
	I1025 17:46:17.058999   66547 ssh_runner.go:195] Run: rm -f paused
	I1025 17:46:17.098129   66547 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 17:46:17.131085   66547 out.go:177] * Done! kubectl is now configured to use "functional-188000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Oct 26 00:45:51 functional-188000 cri-dockerd[4689]: time="2023-10-26T00:45:51Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 26 00:45:51 functional-188000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Oct 26 00:45:51 functional-188000 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Oct 26 00:45:51 functional-188000 systemd[1]: cri-docker.service: Deactivated successfully.
	Oct 26 00:45:51 functional-188000 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Oct 26 00:45:51 functional-188000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Start docker client with request timeout 0s"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Loaded network plugin cni"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Docker Info: &{ID:68cb55e9-3c82-4216-a56e-ae91fcc0c943 Containers:14 ContainersRunning:0 ContainersPaused:0 ContainersStopped:14 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:35 SystemTime:2023-10-26T00:45:51.538922738Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:6.4.16-linuxkit OperatingSystem:
Ubuntu 22.04.3 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0000ce230 NCPU:12 MemTotal:6227828736 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:functional-188000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense
: DefaultAddressPools:[] Warnings:[]}"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 26 00:45:51 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:51Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 26 00:45:51 functional-188000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5f18f6726713c225b033534e3b5f28ba842f91579806ad1d533a77c48a35cc20/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/73f778a7c4041980a802143f23147c72daead76bace354ee12338d2664f533ad/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4a4bc70f7327ec61234ddaf949266c43749e9aa7244880110cbb75b815a88b9f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/52d6a2732be9d158703e4d9b2adc05c58188b6e0fe375bc1332711e2a6aa9ba5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9dfe51b5073325f5ba2cc1b45fd812a87d8fba60716c34dee564ee01c3d53a02/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/559b9a278dba392d83f546119ba1fbdb9d79aa4041d57c4d2c3a5243195064d8/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 cri-dockerd[4777]: time="2023-10-26T00:45:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c29298a9a01a00b04a2372723fc93bcf9a28f2909c24e0e2f2a8fdbbd36d2c8d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 00:45:57 functional-188000 dockerd[4476]: time="2023-10-26T00:45:57.845413519Z" level=info msg="ignoring event" container=556e0913a4194f07cf0a6c6a9b4ddec0df633530cf7e3a9870b1a67c0a39079f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c500b713ece17       6e38f40d628db       18 seconds ago       Running             storage-provisioner       2                   73f778a7c4041       storage-provisioner
	c9aa983994347       ead0a4a53df89       36 seconds ago       Running             coredns                   1                   c29298a9a01a0       coredns-5dd5756b68-ff5ll
	97bbb1430ec1f       bfc896cf80fba       36 seconds ago       Running             kube-proxy                1                   559b9a278dba3       kube-proxy-bnvpn
	c51c8d65b5703       10baa1ca17068       36 seconds ago       Running             kube-controller-manager   1                   9dfe51b507332       kube-controller-manager-functional-188000
	de0914a73beb4       6d1b4fd1b182d       36 seconds ago       Running             kube-scheduler            1                   52d6a2732be9d       kube-scheduler-functional-188000
	0b21c9816561f       5374347291230       36 seconds ago       Running             kube-apiserver            1                   4a4bc70f7327e       kube-apiserver-functional-188000
	556e0913a4194       6e38f40d628db       36 seconds ago       Exited              storage-provisioner       1                   73f778a7c4041       storage-provisioner
	3c8adea9036e4       73deb9a3f7025       36 seconds ago       Running             etcd                      1                   5f18f6726713c       etcd-functional-188000
	af12fd91d5bf2       ead0a4a53df89       59 seconds ago       Exited              coredns                   0                   8422dfd027437       coredns-5dd5756b68-ff5ll
	a43456fe6b21c       bfc896cf80fba       About a minute ago   Exited              kube-proxy                0                   5e341bbd6ea5e       kube-proxy-bnvpn
	274ded1e50f28       6d1b4fd1b182d       About a minute ago   Exited              kube-scheduler            0                   43612e5ea242d       kube-scheduler-functional-188000
	3e2bf17527f5a       10baa1ca17068       About a minute ago   Exited              kube-controller-manager   0                   118d5ec425047       kube-controller-manager-functional-188000
	f6135c3690fc6       5374347291230       About a minute ago   Exited              kube-apiserver            0                   f4161cfa72653       kube-apiserver-functional-188000
	0f623e5f8d417       73deb9a3f7025       About a minute ago   Exited              etcd                      0                   11c2daa128776       etcd-functional-188000
	
	* 
	* ==> coredns [af12fd91d5bf] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [c9aa98399434] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56233 - 60901 "HINFO IN 9099018579167008431.5841343290163082259. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.066366848s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-188000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-188000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=functional-188000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T17_45_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 00:45:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-188000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 00:46:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 00:46:20 +0000   Thu, 26 Oct 2023 00:45:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 00:46:20 +0000   Thu, 26 Oct 2023 00:45:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 00:46:20 +0000   Thu, 26 Oct 2023 00:45:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 00:46:20 +0000   Thu, 26 Oct 2023 00:45:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-188000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7fe4125713c4e90ad2ec45d2a9bca5f
	  System UUID:                d7fe4125713c4e90ad2ec45d2a9bca5f
	  Boot ID:                    97028b5e-c1fe-46d5-abb1-881a12fedf72
	  Kernel Version:             6.4.16-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ff5ll                     100m (0%)     0 (0%)      70Mi (1%)        170Mi (2%)     62s
	  kube-system                 etcd-functional-188000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         75s
	  kube-system                 kube-apiserver-functional-188000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-188000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-bnvpn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-functional-188000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (6%)   0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 33s   kube-proxy       
	  Normal  Starting                 75s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s   kubelet          Node functional-188000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s   kubelet          Node functional-188000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s   kubelet          Node functional-188000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s   node-controller  Node functional-188000 event: Registered Node functional-188000 in Controller
	  Normal  RegisteredNode           21s   node-controller  Node functional-188000 event: Registered Node functional-188000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.002920] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:07.0: PCI INT A: no GSI
	[  +0.002075] virtio-pci 0000:00:08.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:08.0: PCI INT A: no GSI
	[  +0.004650] virtio-pci 0000:00:09.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:09.0: PCI INT A: no GSI
	[  +0.005011] virtio-pci 0000:00:0a.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0a.0: PCI INT A: no GSI
	[  +0.001909] virtio-pci 0000:00:0b.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0b.0: PCI INT A: no GSI
	[  +0.005014] virtio-pci 0000:00:0c.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0c.0: PCI INT A: no GSI
	[  +0.000255] virtio-pci 0000:00:0d.0: can't derive routing for PCI INT A
	[  +0.000000] virtio-pci 0000:00:0d.0: PCI INT A: no GSI
	[  +0.003210] virtio-pci 0000:00:0e.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0e.0: PCI INT A: no GSI
	[  +0.007936] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
	[  +0.025214] lpc_ich 0000:00:1f.0: No MFD cells added
	[  +0.006812] fail to initialize ptp_kvm
	[  +0.000001] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.756658] netlink: 'rc.init': attribute type 22 has an invalid length.
	[  +0.007092] 3[378]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.199399] FAT-fs (loop0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[  +0.000376] FAT-fs (loop0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[  +0.016213] grpcfuse: loading out-of-tree module taints kernel.
	
	* 
	* ==> etcd [0f623e5f8d41] <==
	* {"level":"info","ts":"2023-10-26T00:45:15.346577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:15.346583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:15.346589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:15.346594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:15.347535Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:15.348204Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-188000 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-26T00:45:15.348248Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T00:45:15.348414Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:15.348478Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:15.348647Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:15.34827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T00:45:15.348705Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-26T00:45:15.348734Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T00:45:15.349529Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-26T00:45:15.349756Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-26T00:45:40.226127Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-26T00:45:40.226204Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-188000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2023-10-26T00:45:40.226349Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-26T00:45:40.226537Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-26T00:45:40.237057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-26T00:45:40.237132Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-26T00:45:40.237178Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-10-26T00:45:40.25251Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-26T00:45:40.252602Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-26T00:45:40.252609Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-188000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [3c8adea9036e] <==
	* {"level":"info","ts":"2023-10-26T00:45:57.823698Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-26T00:45:57.823719Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-26T00:45:57.824037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-10-26T00:45:57.824131Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-10-26T00:45:57.824371Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:57.824437Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T00:45:57.831352Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-26T00:45:57.831434Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-26T00:45:57.830729Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-26T00:45:57.832001Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-26T00:45:57.832059Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-26T00:45:59.235146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:59.235262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:59.235312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-26T00:45:59.235466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2023-10-26T00:45:59.235493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-10-26T00:45:59.235508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2023-10-26T00:45:59.23552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-10-26T00:45:59.23711Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-188000 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-26T00:45:59.237176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T00:45:59.237422Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T00:45:59.238048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-26T00:45:59.238657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T00:45:59.238926Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-26T00:45:59.23897Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  00:46:34 up 9 min,  0 users,  load average: 0.29, 0.40, 0.22
	Linux functional-188000 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Oct 10 20:42:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [0b21c9816561] <==
	* I1026 00:46:00.348070       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I1026 00:46:00.347870       1 controller.go:116] Starting legacy_token_tracking_controller
	I1026 00:46:00.348097       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I1026 00:46:00.348157       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I1026 00:46:00.348201       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1026 00:46:00.348372       1 aggregator.go:164] waiting for initial CRD sync...
	I1026 00:46:00.348683       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1026 00:46:00.348842       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1026 00:46:00.348118       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1026 00:46:00.523071       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 00:46:00.523085       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 00:46:00.523095       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 00:46:00.523102       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 00:46:00.523243       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 00:46:00.523292       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 00:46:00.523313       1 aggregator.go:166] initial CRD sync complete...
	I1026 00:46:00.523319       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 00:46:00.523324       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 00:46:00.523329       1 cache.go:39] Caches are synced for autoregister controller
	I1026 00:46:00.523617       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 00:46:00.523893       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 00:46:00.528536       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 00:46:01.351769       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 00:46:13.389432       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 00:46:13.438940       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [f6135c3690fc] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 00:45:50.197334       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 00:45:50.225475       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 00:45:50.228317       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [3e2bf17527f5] <==
	* I1026 00:45:31.831943       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1026 00:45:31.840012       1 shared_informer.go:318] Caches are synced for endpoint
	I1026 00:45:31.877400       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 00:45:31.880753       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1026 00:45:31.885421       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 00:45:31.904126       1 shared_informer.go:318] Caches are synced for persistent volume
	I1026 00:45:32.152779       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1026 00:45:32.231876       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1026 00:45:32.386707       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 00:45:32.386800       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 00:45:32.401582       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 00:45:32.627409       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bnvpn"
	I1026 00:45:32.826622       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-7kd6b"
	I1026 00:45:32.831631       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-ff5ll"
	I1026 00:45:32.846087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="692.840701ms"
	I1026 00:45:32.852950       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-7kd6b"
	I1026 00:45:32.930715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.594365ms"
	I1026 00:45:32.937151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.368905ms"
	I1026 00:45:32.937253       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.642µs"
	I1026 00:45:34.959284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.838µs"
	I1026 00:45:34.967834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.283µs"
	I1026 00:45:34.972519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.729µs"
	I1026 00:45:34.975513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="169.966µs"
	I1026 00:45:34.988537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.407348ms"
	I1026 00:45:34.988642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.094µs"
	
	* 
	* ==> kube-controller-manager [c51c8d65b570] <==
	* I1026 00:46:13.337661       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1026 00:46:13.337768       1 shared_informer.go:318] Caches are synced for expand
	I1026 00:46:13.338324       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1026 00:46:13.338391       1 shared_informer.go:318] Caches are synced for endpoint
	I1026 00:46:13.338354       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1026 00:46:13.338920       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1026 00:46:13.338937       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1026 00:46:13.346929       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1026 00:46:13.354418       1 shared_informer.go:318] Caches are synced for disruption
	I1026 00:46:13.356116       1 shared_informer.go:318] Caches are synced for taint
	I1026 00:46:13.356431       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1026 00:46:13.356565       1 taint_manager.go:211] "Sending events to api server"
	I1026 00:46:13.356582       1 event.go:307] "Event occurred" object="functional-188000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-188000 event: Registered Node functional-188000 in Controller"
	I1026 00:46:13.356461       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1026 00:46:13.356691       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-188000"
	I1026 00:46:13.356829       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1026 00:46:13.389380       1 shared_informer.go:318] Caches are synced for stateful set
	I1026 00:46:13.394395       1 shared_informer.go:318] Caches are synced for HPA
	I1026 00:46:13.440027       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 00:46:13.468783       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 00:46:13.482770       1 shared_informer.go:318] Caches are synced for namespace
	I1026 00:46:13.487654       1 shared_informer.go:318] Caches are synced for service account
	I1026 00:46:13.852918       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 00:46:13.888067       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 00:46:13.888111       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [97bbb1430ec1] <==
	* I1026 00:45:58.046567       1 server_others.go:69] "Using iptables proxy"
	E1026 00:45:58.124171       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-188000": dial tcp 192.168.49.2:8441: connect: connection refused
	I1026 00:46:00.523152       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1026 00:46:00.634467       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 00:46:00.637637       1 server_others.go:152] "Using iptables Proxier"
	I1026 00:46:00.637707       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 00:46:00.637714       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 00:46:00.637736       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 00:46:00.638196       1 server.go:846] "Version info" version="v1.28.3"
	I1026 00:46:00.638351       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:46:00.639088       1 config.go:188] "Starting service config controller"
	I1026 00:46:00.639120       1 config.go:315] "Starting node config controller"
	I1026 00:46:00.639132       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 00:46:00.639133       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 00:46:00.640110       1 config.go:97] "Starting endpoint slice config controller"
	I1026 00:46:00.640179       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 00:46:00.740159       1 shared_informer.go:318] Caches are synced for node config
	I1026 00:46:00.740204       1 shared_informer.go:318] Caches are synced for service config
	I1026 00:46:00.741388       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [a43456fe6b21] <==
	* I1026 00:45:33.930364       1 server_others.go:69] "Using iptables proxy"
	I1026 00:45:33.942189       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1026 00:45:34.038701       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 00:45:34.041189       1 server_others.go:152] "Using iptables Proxier"
	I1026 00:45:34.041282       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 00:45:34.041292       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 00:45:34.041312       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 00:45:34.041753       1 server.go:846] "Version info" version="v1.28.3"
	I1026 00:45:34.041812       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:45:34.042730       1 config.go:97] "Starting endpoint slice config controller"
	I1026 00:45:34.042795       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 00:45:34.042824       1 config.go:188] "Starting service config controller"
	I1026 00:45:34.042841       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 00:45:34.047275       1 config.go:315] "Starting node config controller"
	I1026 00:45:34.047631       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 00:45:34.144114       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 00:45:34.145540       1 shared_informer.go:318] Caches are synced for service config
	I1026 00:45:34.148075       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [274ded1e50f2] <==
	* E1026 00:45:16.836752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 00:45:16.836224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 00:45:16.836764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1026 00:45:16.836303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 00:45:16.836786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 00:45:16.836345       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 00:45:16.836837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1026 00:45:16.836397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 00:45:16.836853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1026 00:45:16.836434       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 00:45:16.836873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1026 00:45:16.837028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 00:45:16.837063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1026 00:45:16.837087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 00:45:16.837204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1026 00:45:17.663006       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 00:45:17.663088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1026 00:45:17.799733       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 00:45:17.799778       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 00:45:17.801879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 00:45:17.801926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 00:45:17.823058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 00:45:17.823115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1026 00:45:19.432586       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1026 00:45:40.189318       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [de0914a73beb] <==
	* I1026 00:45:58.548357       1 serving.go:348] Generated self-signed cert in-memory
	I1026 00:46:00.527074       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1026 00:46:00.527138       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:46:00.533885       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 00:46:00.533939       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 00:46:00.534005       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 00:46:00.534017       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 00:46:00.534029       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 00:46:00.534111       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 00:46:00.534420       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 00:46:00.534470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 00:46:00.635108       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 00:46:00.635159       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 00:46:00.635125       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.934142    2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dfe51b5073325f5ba2cc1b45fd812a87d8fba60716c34dee564ee01c3d53a02"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935331    2496 status_manager.go:853] "Failed to get status for pod" podUID="0f3f9f77e1fc8a12cf1621823498272c" pod="kube-system/kube-apiserver-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935526    2496 status_manager.go:853] "Failed to get status for pod" podUID="35c2ae14-426f-4a44-b88e-d3d88befe16f" pod="kube-system/kube-proxy-bnvpn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-bnvpn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935689    2496 status_manager.go:853] "Failed to get status for pod" podUID="7022509e-429b-40a1-95e2-ac3b980b2b1e" pod="kube-system/coredns-5dd5756b68-ff5ll" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ff5ll\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935818    2496 status_manager.go:853] "Failed to get status for pod" podUID="6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.935937    2496 status_manager.go:853] "Failed to get status for pod" podUID="1a5cba45956bd26c7fcaab9a2058286e" pod="kube-system/kube-controller-manager-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.936061    2496 status_manager.go:853] "Failed to get status for pod" podUID="884ed00cd2aaa3b4f518197dc5a844ef" pod="kube-system/etcd-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:57 functional-188000 kubelet[2496]: I1026 00:45:57.936173    2496 status_manager.go:853] "Failed to get status for pod" podUID="5b69b95f77dea85816490ff8f86d59b3" pod="kube-system/kube-scheduler-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.027715    2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a4bc70f7327ec61234ddaf949266c43749e9aa7244880110cbb75b815a88b9f"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.028555    2496 status_manager.go:853] "Failed to get status for pod" podUID="35c2ae14-426f-4a44-b88e-d3d88befe16f" pod="kube-system/kube-proxy-bnvpn" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-bnvpn\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.029326    2496 status_manager.go:853] "Failed to get status for pod" podUID="7022509e-429b-40a1-95e2-ac3b980b2b1e" pod="kube-system/coredns-5dd5756b68-ff5ll" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ff5ll\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.030672    2496 status_manager.go:853] "Failed to get status for pod" podUID="6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.031069    2496 status_manager.go:853] "Failed to get status for pod" podUID="1a5cba45956bd26c7fcaab9a2058286e" pod="kube-system/kube-controller-manager-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.031381    2496 status_manager.go:853] "Failed to get status for pod" podUID="884ed00cd2aaa3b4f518197dc5a844ef" pod="kube-system/etcd-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.031772    2496 status_manager.go:853] "Failed to get status for pod" podUID="5b69b95f77dea85816490ff8f86d59b3" pod="kube-system/kube-scheduler-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.031959    2496 status_manager.go:853] "Failed to get status for pod" podUID="0f3f9f77e1fc8a12cf1621823498272c" pod="kube-system/kube-apiserver-functional-188000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-188000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.148811    2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c29298a9a01a00b04a2372723fc93bcf9a28f2909c24e0e2f2a8fdbbd36d2c8d"
	Oct 26 00:45:58 functional-188000 kubelet[2496]: I1026 00:45:58.233984    2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559b9a278dba392d83f546119ba1fbdb9d79aa4041d57c4d2c3a5243195064d8"
	Oct 26 00:45:59 functional-188000 kubelet[2496]: I1026 00:45:59.334479    2496 scope.go:117] "RemoveContainer" containerID="acd3650135af374f4320e0d6bcd857120933741c11ca50532f0fb03830938045"
	Oct 26 00:45:59 functional-188000 kubelet[2496]: I1026 00:45:59.334754    2496 scope.go:117] "RemoveContainer" containerID="556e0913a4194f07cf0a6c6a9b4ddec0df633530cf7e3a9870b1a67c0a39079f"
	Oct 26 00:45:59 functional-188000 kubelet[2496]: E1026 00:45:59.335043    2496 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f)\"" pod="kube-system/storage-provisioner" podUID="6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f"
	Oct 26 00:46:00 functional-188000 kubelet[2496]: E1026 00:46:00.364718    2496 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Oct 26 00:46:00 functional-188000 kubelet[2496]: I1026 00:46:00.530690    2496 scope.go:117] "RemoveContainer" containerID="556e0913a4194f07cf0a6c6a9b4ddec0df633530cf7e3a9870b1a67c0a39079f"
	Oct 26 00:46:00 functional-188000 kubelet[2496]: E1026 00:46:00.531052    2496 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f)\"" pod="kube-system/storage-provisioner" podUID="6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f"
	Oct 26 00:46:15 functional-188000 kubelet[2496]: I1026 00:46:15.435655    2496 scope.go:117] "RemoveContainer" containerID="556e0913a4194f07cf0a6c6a9b4ddec0df633530cf7e3a9870b1a67c0a39079f"
	
	* 
	* ==> storage-provisioner [556e0913a419] <==
	* I1026 00:45:57.743158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 00:45:57.745843       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [c500b713ece1] <==
	* I1026 00:46:15.508682       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 00:46:15.531510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 00:46:15.531595       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 00:46:32.951020       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 00:46:32.951413       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-188000_069b3832-8bdf-4a3d-b362-a470c08bf0d5!
	I1026 00:46:32.951504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a715413-f24a-499f-968f-dfcb18dc2444", APIVersion:"v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-188000_069b3832-8bdf-4a3d-b362-a470c08bf0d5 became leader
	I1026 00:46:33.052036       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-188000_069b3832-8bdf-4a3d-b362-a470c08bf0d5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-188000 -n functional-188000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-188000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (5.05s)
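The two storage-provisioner excerpts above show the first container (556e0913a419) exiting immediately because the in-cluster apiserver service at 10.96.0.1:443 refused connections, which is what drives the 10s CrashLoopBackOff back-off seen in the kubelet log, while the replacement container (c500b713ece1) initializes and acquires the kube-system/k8s.io-minikube-hostpath lease once the apiserver is reachable again. A minimal sketch for confirming that recovery after the run, assuming the functional-188000 kubeconfig context is still loadable:

    # restart count of the storage-provisioner pod (expected to be > 0 after the back-off)
    kubectl --context functional-188000 -n kube-system get pod storage-provisioner \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'
    # current holder of the hostpath provisioner lease (recorded on the Endpoints object per the log)
    kubectl --context functional-188000 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml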

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (263.14s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-207000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1025 17:51:51.108463   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:52:18.826180   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:52:35.242698   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:35.269980   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:35.282186   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:35.304421   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:35.346698   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:35.428243   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:35.589214   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:35.910717   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:36.553108   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:37.833353   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:40.395791   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:45.517715   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:52:55.759139   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:53:16.242005   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:53:57.204557   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-207000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m23.097457599s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-207000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-207000 in cluster ingress-addon-legacy-207000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
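The duplicated "Generating certificates and keys" / "Booting up control plane" lines in the summary above indicate that the control plane did not come up healthy on the first kubeadm attempt and the bootstrap was retried before the run finally gave up with exit status 109. While such a retry is in progress, the apiserver can be probed from the host through whatever port Docker published for 8443 on the node container; a sketch, assuming the ingress-addon-legacy-207000 container from this run still exists (PORT is just a local variable for the sketch):

    # resolve the host port mapped to the node's 8443 and hit /healthz (self-signed cert, hence -k)
    PORT=$(docker container inspect ingress-addon-legacy-207000 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}')
    curl -k "https://127.0.0.1:${PORT}/healthz"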
** stderr ** 
	I1025 17:50:01.109179   68230 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:50:01.109456   68230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:50:01.109462   68230 out.go:309] Setting ErrFile to fd 2...
	I1025 17:50:01.109466   68230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:50:01.109646   68230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 17:50:01.111157   68230 out.go:303] Setting JSON to false
	I1025 17:50:01.133115   68230 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":31769,"bootTime":1698249632,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 17:50:01.133230   68230 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 17:50:01.155670   68230 out.go:177] * [ingress-addon-legacy-207000] minikube v1.31.2 on Darwin 14.0
	I1025 17:50:01.198614   68230 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 17:50:01.198770   68230 notify.go:220] Checking for updates...
	I1025 17:50:01.242226   68230 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:50:01.265551   68230 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 17:50:01.287586   68230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:50:01.309303   68230 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 17:50:01.330407   68230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 17:50:01.352941   68230 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 17:50:01.412254   68230 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 17:50:01.412392   68230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:50:01.513654   68230 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-26 00:50:01.502403485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:50:01.556352   68230 out.go:177] * Using the docker driver based on user configuration
	I1025 17:50:01.578372   68230 start.go:298] selected driver: docker
	I1025 17:50:01.578397   68230 start.go:902] validating driver "docker" against <nil>
	I1025 17:50:01.578413   68230 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 17:50:01.582840   68230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:50:01.685076   68230 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-26 00:50:01.674222533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:50:01.685264   68230 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 17:50:01.685470   68230 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 17:50:01.706755   68230 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 17:50:01.728683   68230 cni.go:84] Creating CNI manager for ""
	I1025 17:50:01.728723   68230 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 17:50:01.728739   68230 start_flags.go:323] config:
	{Name:ingress-addon-legacy-207000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-207000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:50:01.772496   68230 out.go:177] * Starting control plane node ingress-addon-legacy-207000 in cluster ingress-addon-legacy-207000
	I1025 17:50:01.793680   68230 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 17:50:01.815672   68230 out.go:177] * Pulling base image ...
	I1025 17:50:01.857760   68230 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 17:50:01.857862   68230 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 17:50:01.912126   68230 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 17:50:01.912151   68230 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 17:50:01.917583   68230 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1025 17:50:01.917596   68230 cache.go:56] Caching tarball of preloaded images
	I1025 17:50:01.917778   68230 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 17:50:01.937777   68230 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1025 17:50:01.980650   68230 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:50:02.063164   68230 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1025 17:50:07.036207   68230 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:50:07.036399   68230 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:50:07.671606   68230 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
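	The preload step above fetches preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 and verifies it against the md5 checksum embedded in the download URL (ff35f06d4f6c0bac9297b8f85d8ebf70). The same check can be reproduced by hand on a macOS host like this agent (md5; use md5sum on Linux); a sketch:

    curl -LO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4"
    md5 preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4   # expect ff35f06d4f6c0bac9297b8f85d8ebf70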
	I1025 17:50:07.671886   68230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/config.json ...
	I1025 17:50:07.671912   68230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/config.json: {Name:mk5489530c4f6a4a27a91a47463e3375a90fcfbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:50:07.672204   68230 cache.go:194] Successfully downloaded all kic artifacts
	I1025 17:50:07.672235   68230 start.go:365] acquiring machines lock for ingress-addon-legacy-207000: {Name:mk1dc3c519d60b7948e50f04f5b98928b4605155 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 17:50:07.672329   68230 start.go:369] acquired machines lock for "ingress-addon-legacy-207000" in 86.801µs
	I1025 17:50:07.672349   68230 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-207000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-207000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 17:50:07.672396   68230 start.go:125] createHost starting for "" (driver="docker")
	I1025 17:50:07.698274   68230 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 17:50:07.698672   68230 start.go:159] libmachine.API.Create for "ingress-addon-legacy-207000" (driver="docker")
	I1025 17:50:07.698718   68230 client.go:168] LocalClient.Create starting
	I1025 17:50:07.698895   68230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem
	I1025 17:50:07.698981   68230 main.go:141] libmachine: Decoding PEM data...
	I1025 17:50:07.699014   68230 main.go:141] libmachine: Parsing certificate...
	I1025 17:50:07.699106   68230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem
	I1025 17:50:07.699168   68230 main.go:141] libmachine: Decoding PEM data...
	I1025 17:50:07.699185   68230 main.go:141] libmachine: Parsing certificate...
	I1025 17:50:07.719472   68230 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-207000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 17:50:07.773303   68230 cli_runner.go:211] docker network inspect ingress-addon-legacy-207000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 17:50:07.773429   68230 network_create.go:281] running [docker network inspect ingress-addon-legacy-207000] to gather additional debugging logs...
	I1025 17:50:07.773455   68230 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-207000
	W1025 17:50:07.825137   68230 cli_runner.go:211] docker network inspect ingress-addon-legacy-207000 returned with exit code 1
	I1025 17:50:07.825173   68230 network_create.go:284] error running [docker network inspect ingress-addon-legacy-207000]: docker network inspect ingress-addon-legacy-207000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-207000 not found
	I1025 17:50:07.825192   68230 network_create.go:286] output of [docker network inspect ingress-addon-legacy-207000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-207000 not found
	
	** /stderr **
	I1025 17:50:07.825349   68230 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 17:50:07.876267   68230 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000915c20}
	I1025 17:50:07.876311   68230 network_create.go:124] attempt to create docker network ingress-addon-legacy-207000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I1025 17:50:07.876393   68230 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-207000 ingress-addon-legacy-207000
	I1025 17:50:07.963672   68230 network_create.go:108] docker network ingress-addon-legacy-207000 192.168.49.0/24 created
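	At this point the run has created a dedicated bridge network for the profile with subnet 192.168.49.0/24, gateway 192.168.49.1 and an MTU of 65535, and it will pin the node container to the static address 192.168.49.2. The created network can be checked with the plain Docker CLI; a sketch:

    docker network inspect ingress-addon-legacy-207000 \
      --format '{{.Name}} {{(index .IPAM.Config 0).Subnet}} mtu={{index .Options "com.docker.network.driver.mtu"}}'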
	I1025 17:50:07.963711   68230 kic.go:118] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-207000" container
	I1025 17:50:07.963850   68230 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 17:50:08.015474   68230 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-207000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-207000 --label created_by.minikube.sigs.k8s.io=true
	I1025 17:50:08.067577   68230 oci.go:103] Successfully created a docker volume ingress-addon-legacy-207000
	I1025 17:50:08.067694   68230 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-207000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-207000 --entrypoint /usr/bin/test -v ingress-addon-legacy-207000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 17:50:08.453700   68230 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-207000
	I1025 17:50:08.453775   68230 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 17:50:08.453794   68230 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 17:50:08.453911   68230 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-207000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 17:50:11.299882   68230 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-207000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (2.845798482s)
	I1025 17:50:11.299906   68230 kic.go:200] duration metric: took 2.846024 seconds to extract preloaded images to volume
	I1025 17:50:11.300023   68230 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 17:50:11.400278   68230 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-207000 --name ingress-addon-legacy-207000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-207000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-207000 --network ingress-addon-legacy-207000 --ip 192.168.49.2 --volume ingress-addon-legacy-207000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 17:50:11.681347   68230 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-207000 --format={{.State.Running}}
	I1025 17:50:11.740311   68230 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-207000 --format={{.State.Status}}
	I1025 17:50:11.801592   68230 cli_runner.go:164] Run: docker exec ingress-addon-legacy-207000 stat /var/lib/dpkg/alternatives/iptables
	I1025 17:50:11.911888   68230 oci.go:144] the created container "ingress-addon-legacy-207000" has a running status.
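	The docker run above starts the kicbase node container privileged on that network with the static IP, a 4096MB memory limit and 2 CPUs, and publishes a set of container ports to ephemeral host ports on 127.0.0.1 (8443 for the apiserver, 22 for SSH, 2376 for the node's dockerd, plus 5000 and 32443). The addresses and mappings Docker actually assigned can be read back afterwards; a sketch, assuming the container is still present:

    docker container inspect ingress-addon-legacy-207000 \
      --format '{{(index .NetworkSettings.Networks "ingress-addon-legacy-207000").IPAddress}} {{json .NetworkSettings.Ports}}'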
	I1025 17:50:11.911933   68230 kic.go:222] Creating ssh key for kic: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa...
	I1025 17:50:12.225117   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 17:50:12.225169   68230 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 17:50:12.291822   68230 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-207000 --format={{.State.Status}}
	I1025 17:50:12.348581   68230 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 17:50:12.348608   68230 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-207000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 17:50:12.449355   68230 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-207000 --format={{.State.Status}}
	I1025 17:50:12.502377   68230 machine.go:88] provisioning docker machine ...
	I1025 17:50:12.502423   68230 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-207000"
	I1025 17:50:12.502523   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:12.556881   68230 main.go:141] libmachine: Using SSH client type: native
	I1025 17:50:12.557221   68230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56749 <nil> <nil>}
	I1025 17:50:12.557233   68230 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-207000 && echo "ingress-addon-legacy-207000" | sudo tee /etc/hostname
	I1025 17:50:12.690558   68230 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-207000
	
	I1025 17:50:12.690680   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:12.742933   68230 main.go:141] libmachine: Using SSH client type: native
	I1025 17:50:12.743256   68230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56749 <nil> <nil>}
	I1025 17:50:12.743271   68230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-207000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-207000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-207000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 17:50:12.864755   68230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 17:50:12.864774   68230 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 17:50:12.864793   68230 ubuntu.go:177] setting up certificates
	I1025 17:50:12.864800   68230 provision.go:83] configureAuth start
	I1025 17:50:12.864871   68230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-207000
	I1025 17:50:12.916896   68230 provision.go:138] copyHostCerts
	I1025 17:50:12.916937   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 17:50:12.916989   68230 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 17:50:12.917000   68230 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 17:50:12.917120   68230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 17:50:12.917299   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 17:50:12.917325   68230 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 17:50:12.917330   68230 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 17:50:12.917398   68230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 17:50:12.917534   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 17:50:12.917565   68230 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 17:50:12.917569   68230 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 17:50:12.917638   68230 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 17:50:12.917790   68230 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-207000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-207000]
	I1025 17:50:13.055471   68230 provision.go:172] copyRemoteCerts
	I1025 17:50:13.055525   68230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 17:50:13.055579   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:13.111505   68230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56749 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa Username:docker}
	I1025 17:50:13.203501   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 17:50:13.203577   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 17:50:13.226337   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 17:50:13.226411   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1025 17:50:13.249109   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 17:50:13.249192   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 17:50:13.272233   68230 provision.go:86] duration metric: configureAuth took 407.406518ms
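	configureAuth above copies the CA material from the host's .minikube/certs directory, generates a per-machine server certificate whose SANs cover 192.168.49.2, 127.0.0.1, localhost, minikube and the profile name, and then pushes ca.pem, server.pem and server-key.pem into /etc/docker on the node for the TLS-verifying dockerd configured later in the log. The SANs of the generated certificate can be inspected with openssl; a sketch using the path from the log:

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'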
	I1025 17:50:13.272248   68230 ubuntu.go:193] setting minikube options for container-runtime
	I1025 17:50:13.272396   68230 config.go:182] Loaded profile config "ingress-addon-legacy-207000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 17:50:13.272457   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:13.326915   68230 main.go:141] libmachine: Using SSH client type: native
	I1025 17:50:13.327317   68230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56749 <nil> <nil>}
	I1025 17:50:13.327331   68230 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 17:50:13.451002   68230 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 17:50:13.451019   68230 ubuntu.go:71] root file system type: overlay
	I1025 17:50:13.451135   68230 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 17:50:13.451220   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:13.502731   68230 main.go:141] libmachine: Using SSH client type: native
	I1025 17:50:13.503061   68230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56749 <nil> <nil>}
	I1025 17:50:13.503119   68230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 17:50:13.638759   68230 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 17:50:13.638846   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:13.691391   68230 main.go:141] libmachine: Using SSH client type: native
	I1025 17:50:13.691719   68230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 56749 <nil> <nil>}
	I1025 17:50:13.691733   68230 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 17:50:14.301746   68230 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 00:50:13.636291938 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1025 17:50:14.301778   68230 machine.go:91] provisioned docker machine in 1.799320099s
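	The SSH session above writes a replacement /lib/systemd/system/docker.service that first blanks ExecStart= (so systemd does not reject the unit for having two ExecStart lines on a Type=notify service) and then launches dockerd on tcp://0.0.0.0:2376 with TLS verification against the certs placed in /etc/docker, a raised nofile ulimit and 10.96.0.0/12 as an insecure registry; the diff || { mv ... && restart; } command only reloads and restarts docker when the new unit actually differs from the installed one. The effective unit inside the node can be reviewed afterwards; a sketch, assuming the profile is still running:

    minikube -p ingress-addon-legacy-207000 ssh -- sudo systemctl cat docker | grep -E '^(ExecStart|Restart|Delegate)='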
	I1025 17:50:14.301786   68230 client.go:171] LocalClient.Create took 6.602863427s
	I1025 17:50:14.301802   68230 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-207000" took 6.602935185s
	I1025 17:50:14.301812   68230 start.go:300] post-start starting for "ingress-addon-legacy-207000" (driver="docker")
	I1025 17:50:14.301822   68230 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 17:50:14.301901   68230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 17:50:14.302003   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:14.359149   68230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56749 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa Username:docker}
	I1025 17:50:14.451869   68230 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 17:50:14.456196   68230 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 17:50:14.456225   68230 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 17:50:14.456234   68230 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 17:50:14.456239   68230 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 17:50:14.456251   68230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 17:50:14.456346   68230 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 17:50:14.456507   68230 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 17:50:14.456513   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /etc/ssl/certs/652922.pem
	I1025 17:50:14.456711   68230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 17:50:14.465851   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 17:50:14.488757   68230 start.go:303] post-start completed in 186.929415ms
	I1025 17:50:14.489326   68230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-207000
	I1025 17:50:14.543257   68230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/config.json ...
	I1025 17:50:14.543705   68230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 17:50:14.543760   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:14.595962   68230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56749 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa Username:docker}
	I1025 17:50:14.682849   68230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 17:50:14.688418   68230 start.go:128] duration metric: createHost completed in 7.015798166s
	I1025 17:50:14.688436   68230 start.go:83] releasing machines lock for "ingress-addon-legacy-207000", held for 7.015889619s
	I1025 17:50:14.688514   68230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-207000
	I1025 17:50:14.740982   68230 ssh_runner.go:195] Run: cat /version.json
	I1025 17:50:14.741012   68230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 17:50:14.741062   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:14.741092   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:14.798752   68230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56749 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa Username:docker}
	I1025 17:50:14.798861   68230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56749 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa Username:docker}
	I1025 17:50:14.989335   68230 ssh_runner.go:195] Run: systemctl --version
	I1025 17:50:14.994859   68230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 17:50:15.000112   68230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 17:50:15.024839   68230 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 17:50:15.024911   68230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 17:50:15.042691   68230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 17:50:15.059764   68230 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 17:50:15.059778   68230 start.go:472] detecting cgroup driver to use...
	I1025 17:50:15.059792   68230 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 17:50:15.059905   68230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 17:50:15.077543   68230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1025 17:50:15.088684   68230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 17:50:15.098906   68230 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 17:50:15.099000   68230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 17:50:15.109498   68230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 17:50:15.120006   68230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 17:50:15.130756   68230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 17:50:15.141431   68230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 17:50:15.151357   68230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 17:50:15.161895   68230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 17:50:15.171364   68230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 17:50:15.180785   68230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:50:15.235544   68230 ssh_runner.go:195] Run: sudo systemctl restart containerd
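
Editor's note (not part of the log): the run of sed commands above rewrites /etc/containerd/config.toml so the runc runtime uses the cgroupfs driver detected on the host (SystemdCgroup = false) before containerd is restarted. Below is a minimal illustrative Go sketch of that one substitution applied to a local copy of the file; the file path is assumed for the example and this snippet is not from the minikube source.

// cgroupfs_patch.go - illustrative only; mirrors the effect of the
// "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'" step above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml" // assumed path, as in the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read config:", err)
		os.Exit(1)
	}
	// Force runc to use the cgroupfs driver, matching the detected host driver.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, patched, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write config:", err)
		os.Exit(1)
	}
	fmt.Println("SystemdCgroup set to false; restart containerd to apply")
}
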
	I1025 17:50:15.322505   68230 start.go:472] detecting cgroup driver to use...
	I1025 17:50:15.322527   68230 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 17:50:15.322594   68230 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 17:50:15.338364   68230 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 17:50:15.338436   68230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 17:50:15.351926   68230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 17:50:15.370744   68230 ssh_runner.go:195] Run: which cri-dockerd
	I1025 17:50:15.376803   68230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 17:50:15.388686   68230 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 17:50:15.408335   68230 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 17:50:15.506966   68230 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 17:50:15.598536   68230 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 17:50:15.598639   68230 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 17:50:15.616723   68230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:50:15.706062   68230 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 17:50:15.956916   68230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 17:50:15.982516   68230 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 17:50:16.056508   68230 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
	I1025 17:50:16.056683   68230 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-207000 dig +short host.docker.internal
	I1025 17:50:16.200976   68230 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 17:50:16.201075   68230 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 17:50:16.205975   68230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 17:50:16.217709   68230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:50:16.269905   68230 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1025 17:50:16.270000   68230 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 17:50:16.290682   68230 docker.go:693] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1025 17:50:16.290697   68230 docker.go:699] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1025 17:50:16.290756   68230 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 17:50:16.300382   68230 ssh_runner.go:195] Run: which lz4
	I1025 17:50:16.304966   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1025 17:50:16.305078   68230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 17:50:16.309463   68230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 17:50:16.309498   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I1025 17:50:22.352534   68230 docker.go:657] Took 6.047318 seconds to copy over tarball
	I1025 17:50:22.352603   68230 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 17:50:24.398741   68230 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.046060279s)
	I1025 17:50:24.398757   68230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 17:50:24.454599   68230 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 17:50:24.465066   68230 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1025 17:50:24.482128   68230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 17:50:24.542190   68230 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 17:50:25.546667   68230 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.00440485s)
	I1025 17:50:25.546757   68230 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 17:50:25.567528   68230 docker.go:693] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1025 17:50:25.567545   68230 docker.go:699] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1025 17:50:25.567554   68230 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 17:50:25.573188   68230 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 17:50:25.573288   68230 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 17:50:25.573595   68230 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 17:50:25.575049   68230 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1025 17:50:25.575057   68230 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 17:50:25.575146   68230 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1025 17:50:25.575276   68230 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 17:50:25.575449   68230 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1025 17:50:25.580732   68230 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 17:50:25.580806   68230 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 17:50:25.580824   68230 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 17:50:25.582863   68230 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1025 17:50:25.584085   68230 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 17:50:25.584083   68230 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1025 17:50:25.585518   68230 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 17:50:25.585540   68230 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1025 17:50:26.250451   68230 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1025 17:50:26.271425   68230 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1025 17:50:26.271469   68230 docker.go:318] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1025 17:50:26.271528   68230 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1025 17:50:26.293674   68230 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1025 17:50:26.388247   68230 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1025 17:50:26.411070   68230 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1025 17:50:26.411099   68230 docker.go:318] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1025 17:50:26.411155   68230 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1025 17:50:26.432157   68230 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1025 17:50:27.043586   68230 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1025 17:50:27.064838   68230 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1025 17:50:27.064868   68230 docker.go:318] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1025 17:50:27.064918   68230 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1025 17:50:27.085644   68230 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1025 17:50:27.091631   68230 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 17:50:27.340333   68230 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 17:50:27.361680   68230 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1025 17:50:27.361706   68230 docker.go:318] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 17:50:27.361758   68230 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1025 17:50:27.384469   68230 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1025 17:50:27.645687   68230 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1025 17:50:27.669438   68230 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1025 17:50:27.669471   68230 docker.go:318] Removing image: registry.k8s.io/coredns:1.6.7
	I1025 17:50:27.669538   68230 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1025 17:50:27.693223   68230 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1025 17:50:27.975121   68230 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1025 17:50:27.997360   68230 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1025 17:50:27.997433   68230 docker.go:318] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1025 17:50:27.997493   68230 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1025 17:50:28.019930   68230 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1025 17:50:28.293028   68230 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1025 17:50:28.314523   68230 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1025 17:50:28.314556   68230 docker.go:318] Removing image: registry.k8s.io/pause:3.2
	I1025 17:50:28.314625   68230 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1025 17:50:28.333624   68230 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1025 17:50:28.333667   68230 cache_images.go:92] LoadImages completed in 2.766021377s
	W1025 17:50:28.333713   68230 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
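
Editor's note (not part of the log): the cache_images loop above repeats one pattern per image: `docker image inspect --format {{.Id}}` to check whether the tag resolves to the expected ID, `docker rmi` when it does not, then an attempted load from the on-disk cache (which fails here because the cached image files are missing). A rough Go sketch of just the existence check, shelling out to the docker CLI; the image reference is an example taken from the log and the snippet is not minikube's implementation.

// imagecheck.go - illustrative sketch of the per-image existence check seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the ID Docker reports for ref, or "" if the image is absent.
func imageID(ref string) string {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil {
		return "" // "No such image" and similar errors mean a transfer/load is needed
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ref := "registry.k8s.io/kube-scheduler:v1.18.20" // example tag from the log
	if id := imageID(ref); id == "" {
		fmt.Printf("%q needs transfer: not present in the container runtime\n", ref)
	} else {
		fmt.Printf("%q already present as %s\n", ref, id)
	}
}
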
	I1025 17:50:28.333789   68230 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 17:50:28.385433   68230 cni.go:84] Creating CNI manager for ""
	I1025 17:50:28.385452   68230 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 17:50:28.385468   68230 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 17:50:28.385483   68230 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-207000 NodeName:ingress-addon-legacy-207000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 17:50:28.385589   68230 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-207000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 17:50:28.385663   68230 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-207000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-207000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 17:50:28.385721   68230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1025 17:50:28.395514   68230 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 17:50:28.395578   68230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 17:50:28.404864   68230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1025 17:50:28.422046   68230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1025 17:50:28.439517   68230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1025 17:50:28.456772   68230 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 17:50:28.461198   68230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 17:50:28.472914   68230 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000 for IP: 192.168.49.2
	I1025 17:50:28.472945   68230 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:50:28.473120   68230 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 17:50:28.473179   68230 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 17:50:28.473221   68230 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/client.key
	I1025 17:50:28.473242   68230 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/client.crt with IP's: []
	I1025 17:50:28.518797   68230 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/client.crt ...
	I1025 17:50:28.518805   68230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/client.crt: {Name:mkd0eaa9da5b45305541acf2d0efce12dc3f1184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:50:28.519112   68230 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/client.key ...
	I1025 17:50:28.519120   68230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/client.key: {Name:mkafeb5efc79f3dc24efd9e59838f0ecae291c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:50:28.519332   68230 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.key.dd3b5fb2
	I1025 17:50:28.519347   68230 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 17:50:28.599390   68230 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.crt.dd3b5fb2 ...
	I1025 17:50:28.599399   68230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.crt.dd3b5fb2: {Name:mkfb1ab11928bb3cafa196407523856a25196a89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:50:28.599648   68230 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.key.dd3b5fb2 ...
	I1025 17:50:28.599672   68230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.key.dd3b5fb2: {Name:mk9442869c9826fe34c84e15de7e1185c2713cbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:50:28.599872   68230 certs.go:337] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.crt
	I1025 17:50:28.600119   68230 certs.go:341] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.key
	I1025 17:50:28.600352   68230 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.key
	I1025 17:50:28.600367   68230 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.crt with IP's: []
	I1025 17:50:28.663950   68230 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.crt ...
	I1025 17:50:28.663958   68230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.crt: {Name:mk675c60e8f8ce15dece1076c92cb6b15cd302aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:50:28.664228   68230 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.key ...
	I1025 17:50:28.664245   68230 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.key: {Name:mk8b84e4194ab7c7dab66b46d199d16813dd2b9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:50:28.664491   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 17:50:28.664515   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 17:50:28.664557   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 17:50:28.664608   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 17:50:28.664625   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 17:50:28.664654   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 17:50:28.664668   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 17:50:28.664710   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 17:50:28.664838   68230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 17:50:28.664879   68230 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 17:50:28.664890   68230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 17:50:28.664950   68230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 17:50:28.664994   68230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 17:50:28.665027   68230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 17:50:28.665082   68230 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 17:50:28.665142   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:50:28.665159   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem -> /usr/share/ca-certificates/65292.pem
	I1025 17:50:28.665175   68230 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /usr/share/ca-certificates/652922.pem
	I1025 17:50:28.665642   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 17:50:28.689752   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 17:50:28.713252   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 17:50:28.736749   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/ingress-addon-legacy-207000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 17:50:28.760825   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 17:50:28.783725   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 17:50:28.806603   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 17:50:28.829633   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 17:50:28.852491   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 17:50:28.875813   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 17:50:28.898694   68230 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 17:50:28.921648   68230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 17:50:28.938749   68230 ssh_runner.go:195] Run: openssl version
	I1025 17:50:28.944773   68230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 17:50:28.955129   68230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:50:28.959828   68230 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:50:28.959873   68230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 17:50:28.966816   68230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 17:50:28.976998   68230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 17:50:28.987068   68230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 17:50:28.991674   68230 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 17:50:28.991717   68230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 17:50:28.998991   68230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 17:50:29.009504   68230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 17:50:29.019648   68230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 17:50:29.024321   68230 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 17:50:29.024372   68230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 17:50:29.031773   68230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 17:50:29.041996   68230 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 17:50:29.046850   68230 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 17:50:29.046898   68230 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-207000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-207000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:50:29.046983   68230 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 17:50:29.066571   68230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 17:50:29.076256   68230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 17:50:29.085480   68230 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 17:50:29.085541   68230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 17:50:29.094914   68230 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 17:50:29.094938   68230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 17:50:29.146088   68230 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1025 17:50:29.146153   68230 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 17:50:29.402862   68230 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 17:50:29.402950   68230 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 17:50:29.403026   68230 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 17:50:29.586497   68230 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 17:50:29.587201   68230 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 17:50:29.587239   68230 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 17:50:29.667984   68230 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 17:50:29.711290   68230 out.go:204]   - Generating certificates and keys ...
	I1025 17:50:29.711373   68230 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 17:50:29.711452   68230 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 17:50:29.807018   68230 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 17:50:29.860902   68230 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 17:50:30.210312   68230 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 17:50:30.305751   68230 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 17:50:30.437796   68230 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 17:50:30.437927   68230 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-207000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 17:50:30.651413   68230 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 17:50:30.651522   68230 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-207000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 17:50:30.737852   68230 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 17:50:30.824196   68230 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 17:50:30.988853   68230 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 17:50:30.989005   68230 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 17:50:31.357620   68230 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 17:50:31.428935   68230 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 17:50:31.565463   68230 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 17:50:31.689461   68230 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 17:50:31.690066   68230 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 17:50:31.711502   68230 out.go:204]   - Booting up control plane ...
	I1025 17:50:31.711627   68230 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 17:50:31.711720   68230 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 17:50:31.711814   68230 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 17:50:31.711924   68230 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 17:50:31.712099   68230 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 17:51:11.700837   68230 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 17:51:11.701850   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:51:11.702069   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:51:16.703891   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:51:16.704099   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:51:26.706282   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:51:26.706489   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:51:46.708342   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:51:46.708564   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:52:26.711325   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:52:26.711507   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:52:26.711518   68230 kubeadm.go:322] 
	I1025 17:52:26.711559   68230 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1025 17:52:26.711630   68230 kubeadm.go:322] 		timed out waiting for the condition
	I1025 17:52:26.711650   68230 kubeadm.go:322] 
	I1025 17:52:26.711693   68230 kubeadm.go:322] 	This error is likely caused by:
	I1025 17:52:26.711732   68230 kubeadm.go:322] 		- The kubelet is not running
	I1025 17:52:26.711865   68230 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 17:52:26.711887   68230 kubeadm.go:322] 
	I1025 17:52:26.712053   68230 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 17:52:26.712124   68230 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1025 17:52:26.712179   68230 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1025 17:52:26.712191   68230 kubeadm.go:322] 
	I1025 17:52:26.712355   68230 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 17:52:26.712481   68230 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 17:52:26.712497   68230 kubeadm.go:322] 
	I1025 17:52:26.712572   68230 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1025 17:52:26.712617   68230 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1025 17:52:26.712689   68230 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1025 17:52:26.712740   68230 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1025 17:52:26.712745   68230 kubeadm.go:322] 
	I1025 17:52:26.714626   68230 kubeadm.go:322] W1026 00:50:29.145270    1711 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1025 17:52:26.714784   68230 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 17:52:26.714883   68230 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 17:52:26.715033   68230 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1025 17:52:26.715114   68230 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 17:52:26.715216   68230 kubeadm.go:322] W1026 00:50:31.693785    1711 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 17:52:26.715316   68230 kubeadm.go:322] W1026 00:50:31.694584    1711 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 17:52:26.715386   68230 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 17:52:26.715456   68230 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
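
Editor's note (not part of the log): the repeated [kubelet-check] failures above come from kubeadm polling the kubelet's local health endpoint until the wait-control-plane phase times out. Below is a minimal Go sketch of the same probe, equivalent to the `curl -sSL http://localhost:10248/healthz` call quoted in the log; it is an illustration for troubleshooting, not part of the test output.

// healthz_probe.go - illustrative kubelet health probe matching the check reported above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// "connection refused" here matches the failure mode in the log:
		// the kubelet is not listening on its healthz port.
		fmt.Fprintln(os.Stderr, "kubelet healthz check failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s (%s)\n", resp.Status, string(body))
}
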
	W1025 17:52:26.715559   68230 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-207000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-207000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 00:50:29.145270    1711 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 00:50:31.693785    1711 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 00:50:31.694584    1711 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-207000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-207000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 00:50:29.145270    1711 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 00:50:31.693785    1711 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 00:50:31.694584    1711 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 17:52:26.715596   68230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 17:52:27.129840   68230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 17:52:27.142156   68230 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 17:52:27.142213   68230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 17:52:27.151395   68230 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 17:52:27.151421   68230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 17:52:27.202537   68230 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1025 17:52:27.202623   68230 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 17:52:27.456378   68230 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 17:52:27.456469   68230 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 17:52:27.456544   68230 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 17:52:27.643446   68230 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 17:52:27.644167   68230 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 17:52:27.644210   68230 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 17:52:27.721767   68230 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 17:52:27.743267   68230 out.go:204]   - Generating certificates and keys ...
	I1025 17:52:27.743351   68230 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 17:52:27.743419   68230 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 17:52:27.743491   68230 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 17:52:27.743563   68230 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 17:52:27.743623   68230 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 17:52:27.743718   68230 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 17:52:27.743820   68230 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 17:52:27.743895   68230 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 17:52:27.744020   68230 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 17:52:27.744114   68230 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 17:52:27.744150   68230 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 17:52:27.744192   68230 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 17:52:27.982529   68230 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 17:52:28.102734   68230 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 17:52:28.214213   68230 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 17:52:28.550365   68230 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 17:52:28.551011   68230 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 17:52:28.572592   68230 out.go:204]   - Booting up control plane ...
	I1025 17:52:28.572693   68230 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 17:52:28.572799   68230 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 17:52:28.572875   68230 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 17:52:28.572958   68230 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 17:52:28.573133   68230 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 17:53:08.561619   68230 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 17:53:08.562471   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:53:08.562701   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:53:13.564237   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:53:13.564453   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:53:23.566005   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:53:23.566175   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:53:43.568774   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:53:43.569005   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:54:23.571923   68230 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 17:54:23.572139   68230 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 17:54:23.572156   68230 kubeadm.go:322] 
	I1025 17:54:23.572210   68230 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1025 17:54:23.572258   68230 kubeadm.go:322] 		timed out waiting for the condition
	I1025 17:54:23.572274   68230 kubeadm.go:322] 
	I1025 17:54:23.572317   68230 kubeadm.go:322] 	This error is likely caused by:
	I1025 17:54:23.572384   68230 kubeadm.go:322] 		- The kubelet is not running
	I1025 17:54:23.572523   68230 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 17:54:23.572537   68230 kubeadm.go:322] 
	I1025 17:54:23.572653   68230 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 17:54:23.572687   68230 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1025 17:54:23.572723   68230 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1025 17:54:23.572733   68230 kubeadm.go:322] 
	I1025 17:54:23.572839   68230 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 17:54:23.572947   68230 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 17:54:23.572962   68230 kubeadm.go:322] 
	I1025 17:54:23.573087   68230 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1025 17:54:23.573142   68230 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1025 17:54:23.573215   68230 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1025 17:54:23.573282   68230 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1025 17:54:23.573289   68230 kubeadm.go:322] 
	I1025 17:54:23.575286   68230 kubeadm.go:322] W1026 00:52:27.200923    4766 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1025 17:54:23.575501   68230 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 17:54:23.575578   68230 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 17:54:23.575724   68230 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
	I1025 17:54:23.575816   68230 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 17:54:23.575927   68230 kubeadm.go:322] W1026 00:52:28.555355    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 17:54:23.576037   68230 kubeadm.go:322] W1026 00:52:28.556130    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 17:54:23.576111   68230 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 17:54:23.576181   68230 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1025 17:54:23.576208   68230 kubeadm.go:406] StartCluster complete in 3m54.522281457s
	I1025 17:54:23.576299   68230 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 17:54:23.596866   68230 logs.go:284] 0 containers: []
	W1025 17:54:23.596880   68230 logs.go:286] No container was found matching "kube-apiserver"
	I1025 17:54:23.596956   68230 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 17:54:23.618947   68230 logs.go:284] 0 containers: []
	W1025 17:54:23.618961   68230 logs.go:286] No container was found matching "etcd"
	I1025 17:54:23.619043   68230 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 17:54:23.640512   68230 logs.go:284] 0 containers: []
	W1025 17:54:23.640527   68230 logs.go:286] No container was found matching "coredns"
	I1025 17:54:23.640593   68230 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 17:54:23.661447   68230 logs.go:284] 0 containers: []
	W1025 17:54:23.661460   68230 logs.go:286] No container was found matching "kube-scheduler"
	I1025 17:54:23.661532   68230 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 17:54:23.699548   68230 logs.go:284] 0 containers: []
	W1025 17:54:23.699562   68230 logs.go:286] No container was found matching "kube-proxy"
	I1025 17:54:23.699641   68230 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 17:54:23.720888   68230 logs.go:284] 0 containers: []
	W1025 17:54:23.720902   68230 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 17:54:23.720984   68230 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 17:54:23.740734   68230 logs.go:284] 0 containers: []
	W1025 17:54:23.740749   68230 logs.go:286] No container was found matching "kindnet"
	I1025 17:54:23.740756   68230 logs.go:123] Gathering logs for kubelet ...
	I1025 17:54:23.740763   68230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 17:54:23.778417   68230 logs.go:123] Gathering logs for dmesg ...
	I1025 17:54:23.778435   68230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 17:54:23.792288   68230 logs.go:123] Gathering logs for describe nodes ...
	I1025 17:54:23.792302   68230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 17:54:23.848062   68230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 17:54:23.848078   68230 logs.go:123] Gathering logs for Docker ...
	I1025 17:54:23.848085   68230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 17:54:23.864865   68230 logs.go:123] Gathering logs for container status ...
	I1025 17:54:23.864879   68230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 17:54:23.918617   68230 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 00:52:27.200923    4766 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 00:52:28.555355    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 00:52:28.556130    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 17:54:23.918639   68230 out.go:239] * 
	* 
	W1025 17:54:23.918687   68230 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 00:52:27.200923    4766 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 00:52:28.555355    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 00:52:28.556130    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 00:52:27.200923    4766 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 00:52:28.555355    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 00:52:28.556130    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 17:54:23.918709   68230 out.go:239] * 
	* 
	W1025 17:54:23.919361   68230 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 17:54:23.985257   68230 out.go:177] 
	W1025 17:54:24.029373   68230 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 00:52:27.200923    4766 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 00:52:28.555355    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 00:52:28.556130    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1026 00:52:27.200923    4766 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1026 00:52:28.555355    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1026 00:52:28.556130    4766 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 17:54:24.029452   68230 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 17:54:24.029488   68230 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 17:54:24.073122   68230 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-207000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (263.14s)
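The suggestion printed in the output above points at a kubelet/cgroup-driver mismatch (Docker reports "cgroupfs" while kubeadm recommends "systemd"), with 'journalctl -xeu kubelet' as the first thing to check. A minimal manual retry along those lines, reusing the profile name and flags from this run plus the --extra-config override minikube itself proposes, might look like the sketch below; the commands and flags are taken from the output above, but the sequence is an illustrative assumption, not part of the test.

	# remove the half-initialized profile, then retry with the suggested kubelet cgroup driver
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-207000
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-207000 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still fails its healthz check, inspect it from inside the node
	out/minikube-darwin-amd64 -p ingress-addon-legacy-207000 ssh -- sudo journalctl -xeu kubelet | tail -n 50
	out/minikube-darwin-amd64 -p ingress-addon-legacy-207000 ssh -- docker ps -a | grep kube | grep -v pause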

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (96.4s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-207000 addons enable ingress --alsologtostderr -v=5
E1025 17:55:19.127980   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-207000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m35.946880346s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 17:54:24.253494   68484 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:54:24.254663   68484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:54:24.254671   68484 out.go:309] Setting ErrFile to fd 2...
	I1025 17:54:24.254676   68484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:54:24.254853   68484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 17:54:24.255194   68484 mustload.go:65] Loading cluster: ingress-addon-legacy-207000
	I1025 17:54:24.255493   68484 config.go:182] Loaded profile config "ingress-addon-legacy-207000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 17:54:24.255520   68484 addons.go:594] checking whether the cluster is paused
	I1025 17:54:24.255600   68484 config.go:182] Loaded profile config "ingress-addon-legacy-207000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 17:54:24.255616   68484 host.go:66] Checking if "ingress-addon-legacy-207000" exists ...
	I1025 17:54:24.256006   68484 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-207000 --format={{.State.Status}}
	I1025 17:54:24.307386   68484 ssh_runner.go:195] Run: systemctl --version
	I1025 17:54:24.307480   68484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:54:24.359677   68484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56749 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa Username:docker}
	I1025 17:54:24.444699   68484 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 17:54:24.485997   68484 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1025 17:54:24.506700   68484 config.go:182] Loaded profile config "ingress-addon-legacy-207000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 17:54:24.506710   68484 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-207000"
	I1025 17:54:24.506716   68484 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-207000"
	I1025 17:54:24.506744   68484 host.go:66] Checking if "ingress-addon-legacy-207000" exists ...
	I1025 17:54:24.507045   68484 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-207000 --format={{.State.Status}}
	I1025 17:54:24.579911   68484 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1025 17:54:24.600978   68484 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1025 17:54:24.621719   68484 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1025 17:54:24.642906   68484 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1025 17:54:24.664457   68484 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 17:54:24.664487   68484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1025 17:54:24.664619   68484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:54:24.719454   68484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56749 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa Username:docker}
	I1025 17:54:24.814034   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:24.869516   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:24.869547   68484 retry.go:31] will retry after 177.172085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:25.048954   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:25.105818   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:25.105838   68484 retry.go:31] will retry after 194.362044ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:25.301330   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:25.355091   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:25.355110   68484 retry.go:31] will retry after 695.279651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:26.050631   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:26.107016   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:26.107034   68484 retry.go:31] will retry after 833.975179ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:26.941477   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:26.997836   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:26.997856   68484 retry.go:31] will retry after 1.01257075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:28.010798   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:28.066643   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:28.066665   68484 retry.go:31] will retry after 1.579177106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:29.646107   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:29.703212   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:29.703238   68484 retry.go:31] will retry after 3.692483929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:33.396052   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:33.452128   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:33.452145   68484 retry.go:31] will retry after 3.179855458s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:36.632307   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:36.687220   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:36.687240   68484 retry.go:31] will retry after 4.934095613s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:41.622129   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:41.680128   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:41.680148   68484 retry.go:31] will retry after 7.788790684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:49.471421   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:54:49.529724   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:54:49.529742   68484 retry.go:31] will retry after 16.437810459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:55:05.968299   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:55:06.025029   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:55:06.025051   68484 retry.go:31] will retry after 15.138526614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:55:21.164323   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:55:21.221516   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:55:21.221534   68484 retry.go:31] will retry after 38.726322804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:55:59.949609   68484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1025 17:56:00.006105   68484 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:00.006132   68484 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-207000"
	I1025 17:56:00.027953   68484 out.go:177] * Verifying ingress addon...
	I1025 17:56:00.071672   68484 out.go:177] 
	W1025 17:56:00.092923   68484 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-207000" does not exist: client config: context "ingress-addon-legacy-207000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-207000" does not exist: client config: context "ingress-addon-legacy-207000" does not exist]
	W1025 17:56:00.092952   68484 out.go:239] * 
	* 
	W1025 17:56:00.101494   68484 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 17:56:00.122617   68484 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
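Every kubectl apply in the enable loop above was refused at localhost:8443, i.e. nothing was listening where the apiserver should be, because the cluster start in the previous test never produced a healthy control plane. A quick sanity check before (or instead of) the roughly 90-second retry loop, assuming the same profile name, might be:

    # From the host: is the control plane reported healthy at all?
    out/minikube-darwin-amd64 -p ingress-addon-legacy-207000 status

    # Inside the node: is a kube-apiserver container actually running?
    out/minikube-darwin-amd64 -p ingress-addon-legacy-207000 ssh
    docker ps --filter name=kube-apiserver --format '{{.ID}} {{.Status}}'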
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-207000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-207000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af",
	        "Created": "2023-10-26T00:50:11.45218617Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 54004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T00:50:11.672937643Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/hostname",
	        "HostsPath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/hosts",
	        "LogPath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af-json.log",
	        "Name": "/ingress-addon-legacy-207000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-207000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-207000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-207000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-207000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-207000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-207000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-207000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa617ca84be928fb641d71e507687001df9d969cb373b11229a32157d4e239bc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56749"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56750"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56751"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56752"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56753"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fa617ca84be9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-207000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "20ad9a1940d7",
	                        "ingress-addon-legacy-207000"
	                    ],
	                    "NetworkID": "68e6ff35ac40bf16032258f0ef4ab598b319b9c2951115a614f605d485cb3257",
	                    "EndpointID": "d011ca93176698fbf61e9ac3f2fadfeda70560f9b4e87e87b20dedac50e67f8b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-207000 -n ingress-addon-legacy-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-207000 -n ingress-addon-legacy-207000: exit status 6 (399.180481ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 17:56:00.589515   68510 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-207000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-207000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (96.40s)
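The status check above reports the container as Running but warns that kubectl is pointing at a stale context, and the stderr line shows that ingress-addon-legacy-207000 never made it into the kubeconfig at all (the failed start never wrote it). When the cluster itself is healthy, the warning's own suggestion is the fix; as a sketch, assuming the same profile name:

    # Rewrite the kubeconfig entry for this profile, as the warning suggests
    out/minikube-darwin-amd64 -p ingress-addon-legacy-207000 update-context

    # Confirm the context now exists and which endpoint it points at
    kubectl config get-contexts ingress-addon-legacy-207000

Here, though, update-context cannot help until the control plane that failed to come up in the first test is actually running.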

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (88.82s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-207000 addons enable ingress-dns --alsologtostderr -v=5
E1025 17:56:51.104561   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-207000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m28.389309405s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 17:56:00.658372   68520 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:56:00.659387   68520 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:56:00.659396   68520 out.go:309] Setting ErrFile to fd 2...
	I1025 17:56:00.659400   68520 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:56:00.659573   68520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 17:56:00.659917   68520 mustload.go:65] Loading cluster: ingress-addon-legacy-207000
	I1025 17:56:00.660211   68520 config.go:182] Loaded profile config "ingress-addon-legacy-207000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 17:56:00.660230   68520 addons.go:594] checking whether the cluster is paused
	I1025 17:56:00.660309   68520 config.go:182] Loaded profile config "ingress-addon-legacy-207000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 17:56:00.660325   68520 host.go:66] Checking if "ingress-addon-legacy-207000" exists ...
	I1025 17:56:00.660701   68520 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-207000 --format={{.State.Status}}
	I1025 17:56:00.711745   68520 ssh_runner.go:195] Run: systemctl --version
	I1025 17:56:00.711830   68520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:56:00.763179   68520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56749 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa Username:docker}
	I1025 17:56:00.850677   68520 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 17:56:00.893158   68520 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1025 17:56:00.915408   68520 config.go:182] Loaded profile config "ingress-addon-legacy-207000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1025 17:56:00.915434   68520 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-207000"
	I1025 17:56:00.915445   68520 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-207000"
	I1025 17:56:00.915509   68520 host.go:66] Checking if "ingress-addon-legacy-207000" exists ...
	I1025 17:56:00.916077   68520 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-207000 --format={{.State.Status}}
	I1025 17:56:00.989912   68520 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1025 17:56:01.012416   68520 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1025 17:56:01.034614   68520 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 17:56:01.034657   68520 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1025 17:56:01.034798   68520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-207000
	I1025 17:56:01.086820   68520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56749 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/ingress-addon-legacy-207000/id_rsa Username:docker}
	I1025 17:56:01.183191   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:01.237163   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:01.237188   68520 retry.go:31] will retry after 214.540022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:01.452840   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:01.508645   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:01.508665   68520 retry.go:31] will retry after 478.334047ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:01.989367   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:02.046357   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:02.046384   68520 retry.go:31] will retry after 828.14636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:02.874903   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:02.934898   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:02.934924   68520 retry.go:31] will retry after 822.682576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:03.759093   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:03.814426   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:03.814444   68520 retry.go:31] will retry after 1.188887335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:05.004253   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:05.059105   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:05.059131   68520 retry.go:31] will retry after 1.999448149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:07.059667   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:07.115074   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:07.115104   68520 retry.go:31] will retry after 3.343250773s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:10.459626   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:10.515994   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:10.516011   68520 retry.go:31] will retry after 2.72763296s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:13.245942   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:13.301730   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:13.301748   68520 retry.go:31] will retry after 6.014551209s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:19.312183   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:19.365991   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:19.366008   68520 retry.go:31] will retry after 9.381003013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:28.743801   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:28.799725   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:28.799742   68520 retry.go:31] will retry after 11.751251967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:40.550874   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:56:40.608040   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:56:40.608059   68520 retry.go:31] will retry after 20.064818138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:57:00.672741   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:57:00.728047   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:57:00.728065   68520 retry.go:31] will retry after 28.098462815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:57:28.827702   68520 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1025 17:57:28.885239   68520 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1025 17:57:28.907140   68520 out.go:177] 
	W1025 17:57:28.927809   68520 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1025 17:57:28.927828   68520 out.go:239] * 
	* 
	W1025 17:57:28.935104   68520 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 17:57:28.955945   68520 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
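As with the ingress addon, each apply attempt above is refused and retry.go backs off with roughly exponentially growing, jittered delays (about 0.2s, 0.5s, 0.8s, ... up to about 28s) before the enable gives up after roughly 88 seconds. The pattern those retry.go lines trace is a plain apply-with-backoff loop; a hand-written shell equivalent (illustrative only, minikube implements this in its Go retry package, not in shell) would be roughly:

    # Illustrative backoff loop matching the retry.go lines above (not minikube's actual code)
    attempts=0
    delay=0.2
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml; do
        attempts=$((attempts + 1))
        [ "$attempts" -ge 15 ] && break                        # give up after a bounded number of tries
        sleep "$delay"
        delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')     # roughly double the wait each time
    done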
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-207000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-207000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af",
	        "Created": "2023-10-26T00:50:11.45218617Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 54004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T00:50:11.672937643Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/hostname",
	        "HostsPath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/hosts",
	        "LogPath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af-json.log",
	        "Name": "/ingress-addon-legacy-207000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-207000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-207000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-207000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-207000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-207000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-207000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-207000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa617ca84be928fb641d71e507687001df9d969cb373b11229a32157d4e239bc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56749"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56750"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56751"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56752"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56753"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fa617ca84be9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-207000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "20ad9a1940d7",
	                        "ingress-addon-legacy-207000"
	                    ],
	                    "NetworkID": "68e6ff35ac40bf16032258f0ef4ab598b319b9c2951115a614f605d485cb3257",
	                    "EndpointID": "d011ca93176698fbf61e9ac3f2fadfeda70560f9b4e87e87b20dedac50e67f8b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-207000 -n ingress-addon-legacy-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-207000 -n ingress-addon-legacy-207000: exit status 6 (380.588838ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 17:57:29.402967   68550 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-207000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-207000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (88.82s)
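Both failures in this test trace back to the condition the status output flags: the container is Running, but the "ingress-addon-legacy-207000" entry is missing from /Users/jenkins/minikube-integration/17488-64832/kubeconfig, so the addon enable and the follow-up status check cannot reach the cluster. A minimal manual check of that diagnosis, using the remediation the warning itself suggests (profile name and kubeconfig path taken from this run):

	export KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-207000   # rewrite the stale/missing kubeconfig entry
	kubectl config get-contexts                                               # the profile context should now be listed
	kubectl --context ingress-addon-legacy-207000 get nodes                   # confirm the API server is reachable again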

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:200: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-207000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-207000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af",
	        "Created": "2023-10-26T00:50:11.45218617Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 54004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T00:50:11.672937643Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/hostname",
	        "HostsPath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/hosts",
	        "LogPath": "/var/lib/docker/containers/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af/20ad9a1940d7850d05a82825a98a0757ef486bdcad9e3d129578b9ea0a60a8af-json.log",
	        "Name": "/ingress-addon-legacy-207000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-207000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-207000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b29294a152213f924cd256e4d9c1607396bbf63cd0cfb740ad6be1bc2590dbc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-207000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-207000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-207000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-207000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-207000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa617ca84be928fb641d71e507687001df9d969cb373b11229a32157d4e239bc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56749"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56750"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56751"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56752"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56753"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fa617ca84be9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-207000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "20ad9a1940d7",
	                        "ingress-addon-legacy-207000"
	                    ],
	                    "NetworkID": "68e6ff35ac40bf16032258f0ef4ab598b319b9c2951115a614f605d485cb3257",
	                    "EndpointID": "d011ca93176698fbf61e9ac3f2fadfeda70560f9b4e87e87b20dedac50e67f8b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-207000 -n ingress-addon-legacy-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-207000 -n ingress-addon-legacy-207000: exit status 6 (381.993256ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 17:57:29.838304   68562 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-207000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-207000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)
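This failure repeats the post-mortem of the previous test, including the full docker inspect dump. When reading these dumps, only a few fields usually matter; a short sketch (not part of the test harness) that pulls just those fields with docker's Go-template formatting:

	docker inspect -f '{{.State.Status}} started {{.State.StartedAt}}' ingress-addon-legacy-207000
	docker inspect -f '{{range $name, $net := .NetworkSettings.Networks}}{{$name}}: {{$net.IPAddress}}{{end}}' ingress-addon-legacy-207000
	docker inspect -f '{{json .NetworkSettings.Ports}}' ingress-addon-legacy-207000   # host port mappings for 22, 2376, 8443, ...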

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (91.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (69.053973ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- rollout status deployment/busybox: exit status 1 (65.009075ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (67.909674ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (75.28406ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.141853ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.551249ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (60.930962ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.541375ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.969828ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.79362ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E1025 18:03:14.193542   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (60.461485ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.284077ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.729218ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (83.377147ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.io: exit status 1 (76.661242ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.481273ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (59.490584ms)

                                                
                                                
** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
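Every kubectl call above fails the same way: fork/exec on the cached binary returns "exec format error", which points at the cached kubectl file itself rather than the cluster; the file at that path is not a runnable darwin/amd64 executable (typically a truncated or wrong-architecture download). A minimal diagnostic sketch, assuming the cache path captured in the errors above:

	file /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl   # should report a Mach-O 64-bit x86_64 executable
	uname -m                                                                                            # host architecture, for comparison
	rm /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl     # drop the bad file; the next 'minikube kubectl' run should re-download it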
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-971000
helpers_test.go:235: (dbg) docker inspect multinode-971000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932",
	        "Created": "2023-10-26T01:01:54.157875975Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 105004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:01:54.3860038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932/hostname",
	        "HostsPath": "/var/lib/docker/containers/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932/hosts",
	        "LogPath": "/var/lib/docker/containers/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932-json.log",
	        "Name": "/multinode-971000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-971000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-971000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b762f3f580f129f804dc6a2edaf0f83875285fdf861fdfa66b8e013332791b02-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b762f3f580f129f804dc6a2edaf0f83875285fdf861fdfa66b8e013332791b02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b762f3f580f129f804dc6a2edaf0f83875285fdf861fdfa66b8e013332791b02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b762f3f580f129f804dc6a2edaf0f83875285fdf861fdfa66b8e013332791b02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-971000",
	                "Source": "/var/lib/docker/volumes/multinode-971000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-971000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-971000",
	                "name.minikube.sigs.k8s.io": "multinode-971000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d47fbd75dccf7d881c8249d14e2d98d0305d3a03187922e80cee12c0b3675f3d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57081"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57082"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57083"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d47fbd75dccf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-971000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "28ed23c726e2",
	                        "multinode-971000"
	                    ],
	                    "NetworkID": "57776fd0c26f2b12bbfd7c05969e8e301d089428ef095f98a16fe04bd9335135",
	                    "EndpointID": "5687b7332f76a8cd6692e1de5b9a5007f38379ff68b6d46dfe1e94bdef9452e3",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-971000 logs -n 25: (2.355846572s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-034000                           | mount-start-1-034000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-049000 ssh -- ls                    | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-049000                           | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	| start   | -p mount-start-2-049000                           | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	| ssh     | mount-start-2-049000 ssh -- ls                    | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-049000                           | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	| delete  | -p mount-start-1-034000                           | mount-start-1-034000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	| start   | -p multinode-971000                               | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:02 PDT |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- apply -f                   | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- rollout                    | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:03 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:03 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:03 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:03 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- exec                       | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | -- nslookup kubernetes.io                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- exec                       | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | -- nslookup kubernetes.default                    |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000                               | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | -- exec  -- nslookup                              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
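For reference, the repeated kubectl rows above are pass-through calls (out/minikube-darwin-amd64 kubectl -p multinode-971000 -- ...) that poll the busybox test deployment until every pod reports an IP. A minimal standalone sketch of the same queries, assuming kubectl is already pointed at this cluster's kubeconfig:

	# pod IPs and pod names for the DNS test deployment
	kubectl get pods -o jsonpath='{.items[*].status.podIP}'
	kubectl get pods -o jsonpath='{.items[*].metadata.name}'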
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 18:01:49
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 18:01:49.498888   70293 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:01:49.499167   70293 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:01:49.499173   70293 out.go:309] Setting ErrFile to fd 2...
	I1025 18:01:49.499177   70293 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:01:49.499348   70293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:01:49.500744   70293 out.go:303] Setting JSON to false
	I1025 18:01:49.522571   70293 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":32477,"bootTime":1698249632,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 18:01:49.522684   70293 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:01:49.544288   70293 out.go:177] * [multinode-971000] minikube v1.31.2 on Darwin 14.0
	I1025 18:01:49.588058   70293 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 18:01:49.588138   70293 notify.go:220] Checking for updates...
	I1025 18:01:49.632070   70293 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:01:49.674869   70293 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:01:49.696126   70293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:01:49.717947   70293 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 18:01:49.739211   70293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:01:49.761353   70293 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:01:49.819296   70293 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 18:01:49.819455   70293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:01:49.922795   70293 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-26 01:01:49.910840689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
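The docker system info --format "{{json .}}" call above dumps the entire daemon state as one JSON document, which minikube then parses for driver validation. The same Go-template mechanism can pull just the handful of fields checked here (field names as they appear in the dump above), e.g.:

	docker system info --format 'OS={{.OperatingSystem}} Server={{.ServerVersion}} CPUs={{.NCPU}} Mem={{.MemTotal}} Cgroup={{.CgroupDriver}}'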
	I1025 18:01:49.965276   70293 out.go:177] * Using the docker driver based on user configuration
	I1025 18:01:49.987014   70293 start.go:298] selected driver: docker
	I1025 18:01:49.987040   70293 start.go:902] validating driver "docker" against <nil>
	I1025 18:01:49.987056   70293 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:01:49.991115   70293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:01:50.092309   70293 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-26 01:01:50.08116753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfine
d name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages
Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sco
ut Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:01:50.092501   70293 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 18:01:50.092690   70293 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:01:50.115236   70293 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 18:01:50.136038   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:01:50.136068   70293 cni.go:136] 0 nodes found, recommending kindnet
	I1025 18:01:50.136082   70293 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 18:01:50.136104   70293 start_flags.go:323] config:
	{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:01:50.180213   70293 out.go:177] * Starting control plane node multinode-971000 in cluster multinode-971000
	I1025 18:01:50.202306   70293 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:01:50.224198   70293 out.go:177] * Pulling base image ...
	I1025 18:01:50.268552   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:01:50.268618   70293 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:01:50.268647   70293 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 18:01:50.268663   70293 cache.go:56] Caching tarball of preloaded images
	I1025 18:01:50.268853   70293 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:01:50.268873   70293 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 18:01:50.270544   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:01:50.270652   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json: {Name:mk1243f5af0e9ee909e7b7748d23b2f2b24a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
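The generated cluster config is persisted to the profile's config.json shown above. Assuming jq is available on the host (not part of this run; field names taken from the config dump logged above), the values minikube settled on can be spot-checked with:

	jq '{Driver, Memory, CPUs, KubernetesVersion: .KubernetesConfig.KubernetesVersion}' \
	  /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json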
	I1025 18:01:50.320506   70293 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:01:50.320523   70293 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:01:50.320548   70293 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:01:50.320594   70293 start.go:365] acquiring machines lock for multinode-971000: {Name:mk01e6cc063ed20be62de6672a43541267a64e02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:01:50.320762   70293 start.go:369] acquired machines lock for "multinode-971000" in 152.785µs
	I1025 18:01:50.320790   70293 start.go:93] Provisioning new machine with config: &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:01:50.320876   70293 start.go:125] createHost starting for "" (driver="docker")
	I1025 18:01:50.347347   70293 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 18:01:50.347730   70293 start.go:159] libmachine.API.Create for "multinode-971000" (driver="docker")
	I1025 18:01:50.347820   70293 client.go:168] LocalClient.Create starting
	I1025 18:01:50.348014   70293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem
	I1025 18:01:50.348121   70293 main.go:141] libmachine: Decoding PEM data...
	I1025 18:01:50.348158   70293 main.go:141] libmachine: Parsing certificate...
	I1025 18:01:50.348270   70293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem
	I1025 18:01:50.348334   70293 main.go:141] libmachine: Decoding PEM data...
	I1025 18:01:50.348354   70293 main.go:141] libmachine: Parsing certificate...
	I1025 18:01:50.369560   70293 cli_runner.go:164] Run: docker network inspect multinode-971000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 18:01:50.422778   70293 cli_runner.go:211] docker network inspect multinode-971000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 18:01:50.422880   70293 network_create.go:281] running [docker network inspect multinode-971000] to gather additional debugging logs...
	I1025 18:01:50.422896   70293 cli_runner.go:164] Run: docker network inspect multinode-971000
	W1025 18:01:50.474299   70293 cli_runner.go:211] docker network inspect multinode-971000 returned with exit code 1
	I1025 18:01:50.474326   70293 network_create.go:284] error running [docker network inspect multinode-971000]: docker network inspect multinode-971000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-971000 not found
	I1025 18:01:50.474339   70293 network_create.go:286] output of [docker network inspect multinode-971000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-971000 not found
	
	** /stderr **
	I1025 18:01:50.474464   70293 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 18:01:50.526790   70293 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1025 18:01:50.527186   70293 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00229d180}
	I1025 18:01:50.527204   70293 network_create.go:124] attempt to create docker network multinode-971000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1025 18:01:50.527272   70293 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-971000 multinode-971000
	I1025 18:01:50.614614   70293 network_create.go:108] docker network multinode-971000 192.168.58.0/24 created
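With the dedicated bridge network in place, the same inspect template minikube used earlier can be re-run by hand to confirm the chosen subnet and gateway:

	docker network inspect multinode-971000 --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# per the log above this should print: multinode-971000: 192.168.58.0/24 via 192.168.58.1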
	I1025 18:01:50.614649   70293 kic.go:118] calculated static IP "192.168.58.2" for the "multinode-971000" container
	I1025 18:01:50.614751   70293 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 18:01:50.665569   70293 cli_runner.go:164] Run: docker volume create multinode-971000 --label name.minikube.sigs.k8s.io=multinode-971000 --label created_by.minikube.sigs.k8s.io=true
	I1025 18:01:50.717387   70293 oci.go:103] Successfully created a docker volume multinode-971000
	I1025 18:01:50.717497   70293 cli_runner.go:164] Run: docker run --rm --name multinode-971000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-971000 --entrypoint /usr/bin/test -v multinode-971000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 18:01:51.131295   70293 oci.go:107] Successfully prepared a docker volume multinode-971000
	I1025 18:01:51.131332   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:01:51.131343   70293 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 18:01:51.131423   70293 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-971000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 18:01:54.005955   70293 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-971000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (2.87440013s)
	I1025 18:01:54.005981   70293 kic.go:200] duration metric: took 2.874548 seconds to extract preloaded images to volume
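The preload tarball is unpacked directly into the multinode-971000 named volume, which later becomes the node container's /var. A rough spot-check, assuming a small utility image such as busybox is available locally (not part of this run), is to mount the same volume read-only and list the image store:

	docker run --rm -v multinode-971000:/var:ro busybox ls /var/lib/docker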
	I1025 18:01:54.006096   70293 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 18:01:54.108416   70293 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-971000 --name multinode-971000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-971000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-971000 --network multinode-971000 --ip 192.168.58.2 --volume multinode-971000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 18:01:54.394985   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Running}}
	I1025 18:01:54.453726   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:01:54.541781   70293 cli_runner.go:164] Run: docker exec multinode-971000 stat /var/lib/dpkg/alternatives/iptables
	I1025 18:01:54.656498   70293 oci.go:144] the created container "multinode-971000" has a running status.
	I1025 18:01:54.656541   70293 kic.go:222] Creating ssh key for kic: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa...
	I1025 18:01:54.881053   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 18:01:54.881112   70293 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 18:01:54.951847   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:01:55.009155   70293 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 18:01:55.009180   70293 kic_runner.go:114] Args: [docker exec --privileged multinode-971000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 18:01:55.107537   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:01:55.159059   70293 machine.go:88] provisioning docker machine ...
	I1025 18:01:55.159102   70293 ubuntu.go:169] provisioning hostname "multinode-971000"
	I1025 18:01:55.159199   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:55.210385   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:55.210713   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:55.210727   70293 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-971000 && echo "multinode-971000" | sudo tee /etc/hostname
	I1025 18:01:55.343625   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-971000
	
	I1025 18:01:55.343723   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:55.395137   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:55.395430   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:55.395444   70293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-971000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-971000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-971000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:01:55.518871   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
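These provisioning commands run over SSH to the container's published port 22, which is mapped to 127.0.0.1:57079 in this particular run. The same session can be opened by hand with the key generated above, e.g.:

	ssh -i /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa \
	    -p 57079 docker@127.0.0.1 hostname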
	I1025 18:01:55.518894   70293 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:01:55.518923   70293 ubuntu.go:177] setting up certificates
	I1025 18:01:55.518934   70293 provision.go:83] configureAuth start
	I1025 18:01:55.519014   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000
	I1025 18:01:55.569972   70293 provision.go:138] copyHostCerts
	I1025 18:01:55.570012   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:01:55.570064   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:01:55.570071   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:01:55.570140   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:01:55.570366   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:01:55.570392   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:01:55.570396   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:01:55.570467   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:01:55.570645   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:01:55.570680   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:01:55.570685   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:01:55.570749   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:01:55.570908   70293 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.multinode-971000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-971000]
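The server certificate generated here is signed by the minikube CA and carries the node IP plus the localhost/minikube names listed in san=[...] above. Assuming openssl is available on the host, the SAN list can be verified with:

	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'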
	I1025 18:01:55.688749   70293 provision.go:172] copyRemoteCerts
	I1025 18:01:55.688802   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:01:55.688860   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:55.740262   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:55.829152   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 18:01:55.829232   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 18:01:55.851809   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 18:01:55.851877   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 18:01:55.874394   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 18:01:55.874466   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:01:55.897116   70293 provision.go:86] duration metric: configureAuth took 378.15699ms
	I1025 18:01:55.897134   70293 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:01:55.897271   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:01:55.897330   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:55.950368   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:55.950681   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:55.950695   70293 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:01:56.073112   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:01:56.073127   70293 ubuntu.go:71] root file system type: overlay
	I1025 18:01:56.073209   70293 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:01:56.073290   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:56.124374   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:56.124685   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:56.124742   70293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:01:56.257703   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:01:56.257827   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:56.309694   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:56.309997   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:56.310012   70293 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:01:56.904475   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:01:56.254499664 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
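The diff above shows minikube replacing the stock docker.service with its own unit: dockerd listening on tcp://0.0.0.0:2376 with TLS, the service CIDR 10.96.0.0/12 allowed as an insecure registry, and Restart=on-failure. Whether the rewritten unit actually took effect inside the node can be checked from the host with:

	docker exec multinode-971000 systemctl cat docker.service
	docker exec multinode-971000 systemctl show docker --property=ExecStart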
	I1025 18:01:56.904507   70293 machine.go:91] provisioned docker machine in 1.745372989s
	I1025 18:01:56.904514   70293 client.go:171] LocalClient.Create took 6.556487105s
	I1025 18:01:56.904535   70293 start.go:167] duration metric: libmachine.API.Create for "multinode-971000" took 6.556612493s
	I1025 18:01:56.904544   70293 start.go:300] post-start starting for "multinode-971000" (driver="docker")
	I1025 18:01:56.904552   70293 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:01:56.904625   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:01:56.904678   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:56.957840   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:57.049566   70293 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:01:57.053697   70293 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1025 18:01:57.053706   70293 command_runner.go:130] > NAME="Ubuntu"
	I1025 18:01:57.053711   70293 command_runner.go:130] > VERSION_ID="22.04"
	I1025 18:01:57.053717   70293 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1025 18:01:57.053730   70293 command_runner.go:130] > VERSION_CODENAME=jammy
	I1025 18:01:57.053735   70293 command_runner.go:130] > ID=ubuntu
	I1025 18:01:57.053738   70293 command_runner.go:130] > ID_LIKE=debian
	I1025 18:01:57.053743   70293 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 18:01:57.053749   70293 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 18:01:57.053756   70293 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 18:01:57.053762   70293 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 18:01:57.053766   70293 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1025 18:01:57.053805   70293 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:01:57.053833   70293 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:01:57.053840   70293 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:01:57.053845   70293 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:01:57.053856   70293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:01:57.053958   70293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:01:57.054128   70293 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:01:57.054135   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /etc/ssl/certs/652922.pem
	I1025 18:01:57.054308   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:01:57.063286   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:01:57.085695   70293 start.go:303] post-start completed in 181.137729ms
	I1025 18:01:57.086222   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000
	I1025 18:01:57.137424   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:01:57.137882   70293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:01:57.137948   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:57.188976   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:57.274968   70293 command_runner.go:130] > 6%!
	(MISSING)I1025 18:01:57.275055   70293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:01:57.280148   70293 command_runner.go:130] > 92G
	I1025 18:01:57.280474   70293 start.go:128] duration metric: createHost completed in 6.959376519s
	I1025 18:01:57.280492   70293 start.go:83] releasing machines lock for "multinode-971000", held for 6.959510468s
	I1025 18:01:57.280584   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000
	I1025 18:01:57.331602   70293 ssh_runner.go:195] Run: cat /version.json
	I1025 18:01:57.331623   70293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:01:57.331675   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:57.331687   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:57.389933   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:57.390145   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:57.583210   70293 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 18:01:57.585545   70293 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1698055645-17423", "minikube_version": "v1.31.2", "commit": "585245745aba695f9444ad633713942a6eacd882"}
	I1025 18:01:57.585678   70293 ssh_runner.go:195] Run: systemctl --version
	I1025 18:01:57.590825   70293 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1025 18:01:57.590853   70293 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1025 18:01:57.590922   70293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 18:01:57.596062   70293 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1025 18:01:57.596081   70293 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1025 18:01:57.596086   70293 command_runner.go:130] > Device: a4h/164d	Inode: 1048758     Links: 1
	I1025 18:01:57.596091   70293 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 18:01:57.596096   70293 command_runner.go:130] > Access: 2023-10-26 00:39:30.354217175 +0000
	I1025 18:01:57.596100   70293 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1025 18:01:57.596105   70293 command_runner.go:130] > Change: 2023-10-26 00:39:14.867105012 +0000
	I1025 18:01:57.596110   70293 command_runner.go:130] >  Birth: 2023-10-26 00:39:14.867105012 +0000
	I1025 18:01:57.596398   70293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 18:01:57.620904   70293 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 18:01:57.620968   70293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 18:01:57.646596   70293 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1025 18:01:57.646627   70293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1025 18:01:57.646635   70293 start.go:472] detecting cgroup driver to use...
	I1025 18:01:57.646649   70293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:01:57.646764   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:01:57.662197   70293 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1025 18:01:57.663133   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 18:01:57.673421   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:01:57.683772   70293 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:01:57.683835   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:01:57.694297   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:01:57.704507   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:01:57.714905   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:01:57.725371   70293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:01:57.735204   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:01:57.745792   70293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:01:57.754192   70293 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1025 18:01:57.754815   70293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:01:57.763765   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:01:57.822145   70293 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:01:57.899919   70293 start.go:472] detecting cgroup driver to use...
	I1025 18:01:57.899939   70293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:01:57.900011   70293 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:01:57.916708   70293 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 18:01:57.916813   70293 command_runner.go:130] > [Unit]
	I1025 18:01:57.916822   70293 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 18:01:57.916827   70293 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 18:01:57.916832   70293 command_runner.go:130] > BindsTo=containerd.service
	I1025 18:01:57.916837   70293 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 18:01:57.916841   70293 command_runner.go:130] > Wants=network-online.target
	I1025 18:01:57.916847   70293 command_runner.go:130] > Requires=docker.socket
	I1025 18:01:57.916851   70293 command_runner.go:130] > StartLimitBurst=3
	I1025 18:01:57.916855   70293 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 18:01:57.916858   70293 command_runner.go:130] > [Service]
	I1025 18:01:57.916862   70293 command_runner.go:130] > Type=notify
	I1025 18:01:57.916867   70293 command_runner.go:130] > Restart=on-failure
	I1025 18:01:57.916876   70293 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 18:01:57.916889   70293 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 18:01:57.916895   70293 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 18:01:57.916901   70293 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 18:01:57.916908   70293 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 18:01:57.916924   70293 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 18:01:57.916934   70293 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 18:01:57.916944   70293 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 18:01:57.916949   70293 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 18:01:57.916953   70293 command_runner.go:130] > ExecStart=
	I1025 18:01:57.916964   70293 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 18:01:57.916972   70293 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 18:01:57.916977   70293 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 18:01:57.916983   70293 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 18:01:57.916986   70293 command_runner.go:130] > LimitNOFILE=infinity
	I1025 18:01:57.916990   70293 command_runner.go:130] > LimitNPROC=infinity
	I1025 18:01:57.916993   70293 command_runner.go:130] > LimitCORE=infinity
	I1025 18:01:57.916998   70293 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 18:01:57.917004   70293 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 18:01:57.917007   70293 command_runner.go:130] > TasksMax=infinity
	I1025 18:01:57.917011   70293 command_runner.go:130] > TimeoutStartSec=0
	I1025 18:01:57.917017   70293 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 18:01:57.917021   70293 command_runner.go:130] > Delegate=yes
	I1025 18:01:57.917028   70293 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 18:01:57.917032   70293 command_runner.go:130] > KillMode=process
	I1025 18:01:57.917048   70293 command_runner.go:130] > [Install]
	I1025 18:01:57.917057   70293 command_runner.go:130] > WantedBy=multi-user.target
	I1025 18:01:57.917748   70293 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:01:57.917808   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:01:57.930232   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:01:57.947889   70293 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 18:01:57.949202   70293 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:01:57.954285   70293 command_runner.go:130] > /usr/bin/cri-dockerd
	I1025 18:01:57.954413   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:01:57.965521   70293 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:01:57.984228   70293 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:01:58.071964   70293 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:01:58.170072   70293 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:01:58.170189   70293 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:01:58.189527   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:01:58.288374   70293 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:01:58.539216   70293 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:01:58.603408   70293 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I1025 18:01:58.603476   70293 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:01:58.670052   70293 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:01:58.725701   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:01:58.787700   70293 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:01:58.812973   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:01:58.881299   70293 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 18:01:58.963610   70293 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:01:58.963712   70293 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:01:58.969075   70293 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 18:01:58.969103   70293 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 18:01:58.969113   70293 command_runner.go:130] > Device: ach/172d	Inode: 267         Links: 1
	I1025 18:01:58.969131   70293 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 18:01:58.969140   70293 command_runner.go:130] > Access: 2023-10-26 01:01:58.891782085 +0000
	I1025 18:01:58.969161   70293 command_runner.go:130] > Modify: 2023-10-26 01:01:58.891782085 +0000
	I1025 18:01:58.969169   70293 command_runner.go:130] > Change: 2023-10-26 01:01:58.902782086 +0000
	I1025 18:01:58.969174   70293 command_runner.go:130] >  Birth: 2023-10-26 01:01:58.891782085 +0000
	I1025 18:01:58.969204   70293 start.go:540] Will wait 60s for crictl version
	I1025 18:01:58.969262   70293 ssh_runner.go:195] Run: which crictl
	I1025 18:01:58.973720   70293 command_runner.go:130] > /usr/bin/crictl
	I1025 18:01:58.973798   70293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:01:59.017439   70293 command_runner.go:130] > Version:  0.1.0
	I1025 18:01:59.017452   70293 command_runner.go:130] > RuntimeName:  docker
	I1025 18:01:59.017456   70293 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1025 18:01:59.017461   70293 command_runner.go:130] > RuntimeApiVersion:  v1
	I1025 18:01:59.019509   70293 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 18:01:59.019591   70293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:01:59.044305   70293 command_runner.go:130] > 24.0.6
	I1025 18:01:59.045447   70293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:01:59.070693   70293 command_runner.go:130] > 24.0.6
	I1025 18:01:59.117235   70293 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 18:01:59.117414   70293 cli_runner.go:164] Run: docker exec -t multinode-971000 dig +short host.docker.internal
	I1025 18:01:59.237590   70293 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:01:59.237698   70293 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:01:59.242851   70293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:01:59.254373   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:59.305845   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:01:59.305914   70293 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:01:59.326533   70293 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1025 18:01:59.326546   70293 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 18:01:59.326550   70293 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1025 18:01:59.326556   70293 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1025 18:01:59.326560   70293 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1025 18:01:59.326564   70293 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1025 18:01:59.326568   70293 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1025 18:01:59.326575   70293 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:01:59.327562   70293 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:01:59.327587   70293 docker.go:623] Images already preloaded, skipping extraction
	I1025 18:01:59.327679   70293 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:01:59.347016   70293 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1025 18:01:59.347030   70293 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1025 18:01:59.347041   70293 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 18:01:59.347048   70293 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1025 18:01:59.347054   70293 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1025 18:01:59.347061   70293 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1025 18:01:59.347067   70293 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1025 18:01:59.347081   70293 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:01:59.348141   70293 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:01:59.348163   70293 cache_images.go:84] Images are preloaded, skipping loading
	I1025 18:01:59.348243   70293 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:01:59.399449   70293 command_runner.go:130] > cgroupfs
	I1025 18:01:59.400592   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:01:59.400605   70293 cni.go:136] 1 nodes found, recommending kindnet
	I1025 18:01:59.400623   70293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 18:01:59.400638   70293 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-971000 NodeName:multinode-971000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:01:59.400755   70293 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-971000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:01:59.400817   70293 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-971000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:01:59.400876   70293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 18:01:59.409970   70293 command_runner.go:130] > kubeadm
	I1025 18:01:59.409979   70293 command_runner.go:130] > kubectl
	I1025 18:01:59.409982   70293 command_runner.go:130] > kubelet
	I1025 18:01:59.410647   70293 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:01:59.410699   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:01:59.419780   70293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1025 18:01:59.436651   70293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:01:59.453548   70293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1025 18:01:59.470877   70293 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:01:59.475384   70293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:01:59.486947   70293 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000 for IP: 192.168.58.2
	I1025 18:01:59.486966   70293 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.487154   70293 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:01:59.487223   70293 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:01:59.487272   70293 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key
	I1025 18:01:59.487287   70293 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt with IP's: []
	I1025 18:01:59.600039   70293 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt ...
	I1025 18:01:59.600051   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt: {Name:mk64559d4fe4512acb57c5db6c94d26b48ee9a4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.600343   70293 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key ...
	I1025 18:01:59.600350   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key: {Name:mka03e9a439d934e99e8b908d2bbdfdb23cd0f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.600548   70293 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key.cee25041
	I1025 18:01:59.600562   70293 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 18:01:59.707555   70293 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt.cee25041 ...
	I1025 18:01:59.707565   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt.cee25041: {Name:mke095aa049bba03566453c031a11ef4f396369d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.707812   70293 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key.cee25041 ...
	I1025 18:01:59.707819   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key.cee25041: {Name:mkf0f5901f2f09a7b9f8ee0fb2794acddc7a12d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.708013   70293 certs.go:337] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt
	I1025 18:01:59.708178   70293 certs.go:341] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key
	I1025 18:01:59.708335   70293 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key
	I1025 18:01:59.708348   70293 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt with IP's: []
	I1025 18:01:59.801029   70293 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt ...
	I1025 18:01:59.801041   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt: {Name:mk28fa6a995bfac0944ebe68223bd61e361107f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.801296   70293 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key ...
	I1025 18:01:59.801309   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key: {Name:mk500d9606cd847ad8de5d70ff22cad1de5293f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.801493   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 18:01:59.801518   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 18:01:59.801535   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 18:01:59.801560   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 18:01:59.801577   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 18:01:59.801594   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 18:01:59.801609   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 18:01:59.801625   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 18:01:59.801716   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:01:59.801762   70293 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:01:59.801775   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:01:59.801803   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:01:59.801830   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:01:59.801863   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:01:59.801928   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:01:59.801963   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /usr/share/ca-certificates/652922.pem
	I1025 18:01:59.801982   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:01:59.802000   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem -> /usr/share/ca-certificates/65292.pem
	I1025 18:01:59.802521   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 18:01:59.826095   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 18:01:59.848783   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:01:59.872111   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 18:01:59.895606   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:01:59.918389   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:01:59.941164   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:01:59.963937   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:01:59.986871   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:02:00.010582   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:02:00.033652   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:02:00.056132   70293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:02:00.073458   70293 ssh_runner.go:195] Run: openssl version
	I1025 18:02:00.079135   70293 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1025 18:02:00.079418   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:02:00.089785   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:00.094373   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:00.094399   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:00.094440   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:00.101107   70293 command_runner.go:130] > b5213941
	I1025 18:02:00.101461   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:02:00.111662   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:02:00.121777   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:02:00.126219   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:02:00.126242   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:02:00.126289   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:02:00.133298   70293 command_runner.go:130] > 51391683
	I1025 18:02:00.133526   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:02:00.143703   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:02:00.153994   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:02:00.158598   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:02:00.158620   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:02:00.158671   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:02:00.165457   70293 command_runner.go:130] > 3ec20f2e
	I1025 18:02:00.165645   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:02:00.175603   70293 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:02:00.180129   70293 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:02:00.180146   70293 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:02:00.180187   70293 kubeadm.go:404] StartCluster: {Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:02:00.180288   70293 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:02:00.200864   70293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:02:00.209861   70293 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1025 18:02:00.209873   70293 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1025 18:02:00.209879   70293 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1025 18:02:00.210643   70293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:02:00.219814   70293 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:02:00.219869   70293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:02:00.229175   70293 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1025 18:02:00.229193   70293 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1025 18:02:00.229199   70293 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1025 18:02:00.229208   70293 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:02:00.229223   70293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:02:00.229248   70293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:02:00.271409   70293 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 18:02:00.271425   70293 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1025 18:02:00.271470   70293 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:02:00.271483   70293 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 18:02:00.393950   70293 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:02:00.393996   70293 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:02:00.394082   70293 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:02:00.394090   70293 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:02:00.394201   70293 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:02:00.394215   70293 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:02:00.675530   70293 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:02:00.675549   70293 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:02:00.717618   70293 out.go:204]   - Generating certificates and keys ...
	I1025 18:02:00.717677   70293 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1025 18:02:00.717690   70293 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:02:00.717787   70293 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1025 18:02:00.717798   70293 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:02:01.017813   70293 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 18:02:01.017828   70293 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 18:02:01.216080   70293 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 18:02:01.216120   70293 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1025 18:02:01.361073   70293 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 18:02:01.361083   70293 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1025 18:02:01.497350   70293 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 18:02:01.497407   70293 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1025 18:02:01.587903   70293 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 18:02:01.587918   70293 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1025 18:02:01.588033   70293 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-971000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 18:02:01.588043   70293 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-971000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 18:02:01.831660   70293 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 18:02:01.831684   70293 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1025 18:02:01.831795   70293 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-971000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 18:02:01.831803   70293 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-971000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 18:02:02.187274   70293 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 18:02:02.187290   70293 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 18:02:02.327439   70293 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 18:02:02.327452   70293 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 18:02:02.556543   70293 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 18:02:02.556568   70293 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1025 18:02:02.556614   70293 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:02:02.556639   70293 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:02:02.675830   70293 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:02:02.675840   70293 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:02:02.770986   70293 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:02:02.770997   70293 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:02:02.975096   70293 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:02:02.975110   70293 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:02:03.129244   70293 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:02:03.129263   70293 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:02:03.129734   70293 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:02:03.129747   70293 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:02:03.132943   70293 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:02:03.132958   70293 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:02:03.154525   70293 out.go:204]   - Booting up control plane ...
	I1025 18:02:03.154607   70293 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:02:03.154612   70293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:02:03.154683   70293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:02:03.154692   70293 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:02:03.154771   70293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:02:03.154775   70293 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:02:03.154866   70293 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:02:03.154874   70293 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:02:03.154974   70293 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:02:03.154989   70293 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:02:03.155046   70293 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 18:02:03.155052   70293 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1025 18:02:03.220245   70293 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:02:03.220261   70293 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:02:08.223537   70293 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002912 seconds
	I1025 18:02:08.223563   70293 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002912 seconds
	I1025 18:02:08.223741   70293 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 18:02:08.223756   70293 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 18:02:08.234134   70293 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 18:02:08.234149   70293 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 18:02:08.751449   70293 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 18:02:08.751467   70293 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1025 18:02:08.751629   70293 kubeadm.go:322] [mark-control-plane] Marking the node multinode-971000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 18:02:08.751648   70293 command_runner.go:130] > [mark-control-plane] Marking the node multinode-971000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 18:02:09.259722   70293 kubeadm.go:322] [bootstrap-token] Using token: g4l4ie.shzm0oxmox6k5n03
	I1025 18:02:09.259733   70293 command_runner.go:130] > [bootstrap-token] Using token: g4l4ie.shzm0oxmox6k5n03
	I1025 18:02:09.299274   70293 out.go:204]   - Configuring RBAC rules ...
	I1025 18:02:09.299385   70293 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 18:02:09.299396   70293 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 18:02:09.341578   70293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 18:02:09.341584   70293 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 18:02:09.347980   70293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 18:02:09.347996   70293 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 18:02:09.352873   70293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 18:02:09.352890   70293 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 18:02:09.356627   70293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 18:02:09.356635   70293 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 18:02:09.360157   70293 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 18:02:09.360176   70293 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 18:02:09.369969   70293 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 18:02:09.369981   70293 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 18:02:09.550694   70293 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 18:02:09.550711   70293 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1025 18:02:09.748835   70293 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 18:02:09.748877   70293 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1025 18:02:09.750071   70293 kubeadm.go:322] 
	I1025 18:02:09.750163   70293 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 18:02:09.750213   70293 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1025 18:02:09.750229   70293 kubeadm.go:322] 
	I1025 18:02:09.750317   70293 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 18:02:09.750328   70293 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1025 18:02:09.750334   70293 kubeadm.go:322] 
	I1025 18:02:09.750363   70293 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 18:02:09.750370   70293 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1025 18:02:09.750451   70293 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 18:02:09.750459   70293 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 18:02:09.750523   70293 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 18:02:09.750536   70293 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 18:02:09.750550   70293 kubeadm.go:322] 
	I1025 18:02:09.750670   70293 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1025 18:02:09.750681   70293 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 18:02:09.750688   70293 kubeadm.go:322] 
	I1025 18:02:09.750765   70293 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 18:02:09.750776   70293 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 18:02:09.750783   70293 kubeadm.go:322] 
	I1025 18:02:09.750852   70293 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1025 18:02:09.750870   70293 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 18:02:09.751017   70293 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 18:02:09.751037   70293 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 18:02:09.751143   70293 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 18:02:09.751157   70293 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 18:02:09.751174   70293 kubeadm.go:322] 
	I1025 18:02:09.751294   70293 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1025 18:02:09.751338   70293 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 18:02:09.751484   70293 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1025 18:02:09.751499   70293 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 18:02:09.751513   70293 kubeadm.go:322] 
	I1025 18:02:09.751668   70293 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g4l4ie.shzm0oxmox6k5n03 \
	I1025 18:02:09.751681   70293 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token g4l4ie.shzm0oxmox6k5n03 \
	I1025 18:02:09.751827   70293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d \
	I1025 18:02:09.751840   70293 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d \
	I1025 18:02:09.751867   70293 kubeadm.go:322] 	--control-plane 
	I1025 18:02:09.751873   70293 command_runner.go:130] > 	--control-plane 
	I1025 18:02:09.751883   70293 kubeadm.go:322] 
	I1025 18:02:09.752001   70293 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 18:02:09.752012   70293 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1025 18:02:09.752028   70293 kubeadm.go:322] 
	I1025 18:02:09.752229   70293 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g4l4ie.shzm0oxmox6k5n03 \
	I1025 18:02:09.752261   70293 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token g4l4ie.shzm0oxmox6k5n03 \
	I1025 18:02:09.752425   70293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d 
	I1025 18:02:09.752430   70293 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d 
	I1025 18:02:09.754745   70293 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 18:02:09.754781   70293 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 18:02:09.754970   70293 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:02:09.754971   70293 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:02:09.754990   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:02:09.755016   70293 cni.go:136] 1 nodes found, recommending kindnet
	I1025 18:02:09.793082   70293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 18:02:09.835717   70293 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 18:02:09.843239   70293 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1025 18:02:09.843270   70293 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1025 18:02:09.843281   70293 command_runner.go:130] > Device: a4h/164d	Inode: 1049408     Links: 1
	I1025 18:02:09.843312   70293 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 18:02:09.843331   70293 command_runner.go:130] > Access: 2023-10-26 00:39:30.623217190 +0000
	I1025 18:02:09.843346   70293 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1025 18:02:09.843360   70293 command_runner.go:130] > Change: 2023-10-26 00:39:15.549105052 +0000
	I1025 18:02:09.843369   70293 command_runner.go:130] >  Birth: 2023-10-26 00:39:15.509105049 +0000
	I1025 18:02:09.843491   70293 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 18:02:09.843503   70293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 18:02:09.869816   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 18:02:10.474000   70293 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1025 18:02:10.478575   70293 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1025 18:02:10.485609   70293 command_runner.go:130] > serviceaccount/kindnet created
	I1025 18:02:10.492977   70293 command_runner.go:130] > daemonset.apps/kindnet created
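
The steps above follow minikube's usual pattern for the CNI manifest: write the YAML onto the node ("scp memory --> /var/tmp/minikube/cni.yaml"), then apply it with the bundled kubectl. A local sketch of that write-then-apply pattern, assuming kubectl and a kubeconfig are available on the current machine rather than executed over SSH inside the node (illustrative only, not minikube's code):

    package cniapply

    import (
    	"os"
    	"os/exec"
    )

    // applyManifest writes a manifest to disk and applies it with kubectl,
    // a local stand-in for the scp + "kubectl apply -f" step in the log above.
    func applyManifest(kubeconfig, path string, manifest []byte) error {
    	if err := os.WriteFile(path, manifest, 0o644); err != nil {
    		return err
    	}
    	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", path)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }
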
	I1025 18:02:10.496737   70293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:02:10.496821   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=multinode-971000 minikube.k8s.io/updated_at=2023_10_25T18_02_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:10.496822   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:10.505919   70293 command_runner.go:130] > -16
	I1025 18:02:10.505955   70293 ops.go:34] apiserver oom_adj: -16
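
The ops.go line above records the apiserver's OOM score adjustment by reading /proc/$(pgrep kube-apiserver)/oom_adj. A standalone sketch of that read, assuming it runs on the node itself where /proc for the apiserver is visible (not the actual ops.go implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// pgrep prints the PIDs of matching processes; a non-zero exit
    	// (surfacing here as err) usually means kube-apiserver is not running.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not found:", err)
    		return
    	}
    	pid := strings.Fields(string(out))[0]

    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println("read oom_adj:", err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }
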
	I1025 18:02:10.576467   70293 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1025 18:02:10.576604   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:10.587651   70293 command_runner.go:130] > node/multinode-971000 labeled
	I1025 18:02:10.689253   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:10.689330   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:10.754773   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:11.255168   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:11.320163   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:11.755137   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:11.823194   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:12.255883   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:12.320450   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:12.755439   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:12.822236   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:13.255271   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:13.325545   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:13.755304   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:13.821854   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:14.255642   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:14.321994   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:14.755240   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:14.821743   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:15.255904   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:15.320078   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:15.757116   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:15.827033   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:16.256931   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:16.325906   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:16.755940   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:16.824699   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:17.257273   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:17.321523   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:17.755962   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:17.821408   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:18.255527   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:18.321416   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:18.756654   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:18.825537   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:19.256410   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:19.320937   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:19.755505   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:19.823916   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:20.257172   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:20.325270   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:20.757434   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:20.825351   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:21.255506   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:21.344623   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:21.755266   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:21.822277   70293 command_runner.go:130] > NAME      SECRETS   AGE
	I1025 18:02:21.822290   70293 command_runner.go:130] > default   0         0s
	I1025 18:02:21.822301   70293 kubeadm.go:1081] duration metric: took 11.325213593s to wait for elevateKubeSystemPrivileges.
	I1025 18:02:21.822317   70293 kubeadm.go:406] StartCluster complete in 21.641485667s
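
The long run of "serviceaccounts \"default\" not found" lines above is a poll-until-present loop: the "default" ServiceAccount is retried roughly every 500ms until the controller manager has created it. A minimal client-go sketch of the same pattern (the function name and clientset wiring are assumptions for illustration, not minikube's elevateKubeSystemPrivileges code):

    package waitsa

    import (
    	"context"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls until the "default" ServiceAccount exists,
    // mirroring the retry loop recorded in the log above.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
    	return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
    		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    		if apierrors.IsNotFound(err) {
    			return false, nil // not created yet; keep polling
    		}
    		return err == nil, err
    	})
    }
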
	I1025 18:02:21.822335   70293 settings.go:142] acquiring lock: {Name:mkca0a8fe84aa865309571104a1d51551b90d38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:02:21.822418   70293 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:21.822969   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:02:21.823254   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 18:02:21.823272   70293 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 18:02:21.823317   70293 addons.go:69] Setting storage-provisioner=true in profile "multinode-971000"
	I1025 18:02:21.823329   70293 addons.go:69] Setting default-storageclass=true in profile "multinode-971000"
	I1025 18:02:21.823333   70293 addons.go:231] Setting addon storage-provisioner=true in "multinode-971000"
	I1025 18:02:21.823354   70293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-971000"
	I1025 18:02:21.823378   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:02:21.823381   70293 host.go:66] Checking if "multinode-971000" exists ...
	I1025 18:02:21.823633   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:02:21.823651   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:21.823783   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:02:21.824503   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:21.828119   70293 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 18:02:21.828412   70293 round_trippers.go:463] GET https://127.0.0.1:57083/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 18:02:21.828423   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:21.828431   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:21.828439   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:21.839605   70293 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1025 18:02:21.839619   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:21.839625   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:21.839644   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:21.839648   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:21.839674   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:21.839679   70293 round_trippers.go:580]     Content-Length: 291
	I1025 18:02:21.839683   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:21 GMT
	I1025 18:02:21.839688   70293 round_trippers.go:580]     Audit-Id: 0ba7391c-69af-48a7-8241-1bf6da20c3e7
	I1025 18:02:21.839758   70293 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"268","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:21.840272   70293 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"268","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:21.840308   70293 round_trippers.go:463] PUT https://127.0.0.1:57083/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 18:02:21.840313   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:21.840319   70293 round_trippers.go:473]     Content-Type: application/json
	I1025 18:02:21.840327   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:21.840333   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:21.846966   70293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1025 18:02:21.847003   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:21.847015   70293 round_trippers.go:580]     Audit-Id: 35a35a08-c3f1-4639-8d3a-053789656b40
	I1025 18:02:21.847023   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:21.847028   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:21.847042   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:21.847063   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:21.847102   70293 round_trippers.go:580]     Content-Length: 291
	I1025 18:02:21.847110   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:21 GMT
	I1025 18:02:21.847127   70293 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"335","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:21.847249   70293 round_trippers.go:463] GET https://127.0.0.1:57083/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 18:02:21.847255   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:21.847261   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:21.847267   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:21.852443   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:21.852461   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:21.852472   70293 round_trippers.go:580]     Content-Length: 291
	I1025 18:02:21.852481   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:21 GMT
	I1025 18:02:21.852488   70293 round_trippers.go:580]     Audit-Id: 3ce6c842-92e1-481a-999f-b0b84a1e30d0
	I1025 18:02:21.852496   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:21.852504   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:21.852515   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:21.852524   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:21.852549   70293 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"335","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:21.852631   70293 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-971000" context rescaled to 1 replicas
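
The GET/PUT pair above rescales the coredns deployment to a single replica through the autoscaling/v1 Scale subresource. A typed client-go sketch of the same operation (a hypothetical helper; the clientset is assumed to be configured elsewhere):

    package corednsscale

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS sets the coredns deployment to the given replica count via
    // the Scale subresource, matching the GET then PUT seen in the log above.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
    	s, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	s.Spec.Replicas = replicas
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", s, metav1.UpdateOptions{})
    	return err
    }
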
	I1025 18:02:21.852659   70293 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:02:21.874725   70293 out.go:177] * Verifying Kubernetes components...
	I1025 18:02:21.916448   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:02:21.945278   70293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:02:21.924235   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:21.934213   70293 command_runner.go:130] > apiVersion: v1
	I1025 18:02:21.982263   70293 command_runner.go:130] > data:
	I1025 18:02:21.945522   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:21.982298   70293 command_runner.go:130] >   Corefile: |
	I1025 18:02:21.982310   70293 command_runner.go:130] >     .:53 {
	I1025 18:02:21.982314   70293 command_runner.go:130] >         errors
	I1025 18:02:21.982341   70293 command_runner.go:130] >         health {
	I1025 18:02:21.982348   70293 command_runner.go:130] >            lameduck 5s
	I1025 18:02:21.982349   70293 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:02:21.982352   70293 command_runner.go:130] >         }
	I1025 18:02:21.982361   70293 command_runner.go:130] >         ready
	I1025 18:02:21.982362   70293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:02:21.982373   70293 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1025 18:02:21.982378   70293 command_runner.go:130] >            pods insecure
	I1025 18:02:21.982388   70293 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1025 18:02:21.982394   70293 command_runner.go:130] >            ttl 30
	I1025 18:02:21.982411   70293 command_runner.go:130] >         }
	I1025 18:02:21.982416   70293 command_runner.go:130] >         prometheus :9153
	I1025 18:02:21.982420   70293 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1025 18:02:21.982426   70293 command_runner.go:130] >            max_concurrent 1000
	I1025 18:02:21.982430   70293 command_runner.go:130] >         }
	I1025 18:02:21.982433   70293 command_runner.go:130] >         cache 30
	I1025 18:02:21.982437   70293 command_runner.go:130] >         loop
	I1025 18:02:21.982437   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:21.982440   70293 command_runner.go:130] >         reload
	I1025 18:02:21.982448   70293 command_runner.go:130] >         loadbalance
	I1025 18:02:21.982452   70293 command_runner.go:130] >     }
	I1025 18:02:21.982455   70293 command_runner.go:130] > kind: ConfigMap
	I1025 18:02:21.982464   70293 command_runner.go:130] > metadata:
	I1025 18:02:21.982472   70293 command_runner.go:130] >   creationTimestamp: "2023-10-26T01:02:09Z"
	I1025 18:02:21.982474   70293 addons.go:231] Setting addon default-storageclass=true in "multinode-971000"
	I1025 18:02:21.982477   70293 command_runner.go:130] >   name: coredns
	I1025 18:02:21.982483   70293 command_runner.go:130] >   namespace: kube-system
	I1025 18:02:21.982487   70293 command_runner.go:130] >   resourceVersion: "264"
	I1025 18:02:21.982491   70293 command_runner.go:130] >   uid: 2fc1cf57-eba4-447b-8e4e-de7a7b3ccd98
	I1025 18:02:21.982492   70293 host.go:66] Checking if "multinode-971000" exists ...
	I1025 18:02:21.982597   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:21.982668   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 18:02:21.983617   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:02:22.058087   70293 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:02:22.058117   70293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:02:22.058251   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:22.059220   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:02:22.059478   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:22.059953   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:22.060418   70293 node_ready.go:35] waiting up to 6m0s for node "multinode-971000" to be "Ready" ...
	I1025 18:02:22.060506   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.060520   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.060537   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.060549   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.065996   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:22.066023   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.066031   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.066036   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.066041   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.066046   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.066061   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.066074   70293 round_trippers.go:580]     Audit-Id: 9233847b-9bc3-40d1-9b18-a1b5e43dd4f8
	I1025 18:02:22.066974   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.068992   70293 node_ready.go:49] node "multinode-971000" has status "Ready":"True"
	I1025 18:02:22.069011   70293 node_ready.go:38] duration metric: took 8.559125ms waiting for node "multinode-971000" to be "Ready" ...
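
The node_ready check above simply fetches the Node object and inspects its Ready condition. A client-go sketch of that polling check (names here are placeholders for illustration, not minikube's node_ready.go implementation):

    package nodeready

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the named node until its Ready condition is True,
    // the same condition the log above reports as Ready:"True".
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }
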
	I1025 18:02:22.069023   70293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:02:22.069095   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:22.069104   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.069116   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.069127   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.074058   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:22.074089   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.074100   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.074123   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.074138   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.074152   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.074162   70293 round_trippers.go:580]     Audit-Id: 16698d60-13d6-49af-b488-f6acefe8d8ba
	I1025 18:02:22.074197   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.074651   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"343"},"items":[{"metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"302","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30360 chars]
	I1025 18:02:22.077517   70293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.077580   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/etcd-multinode-971000
	I1025 18:02:22.077586   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.077593   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.077600   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.081207   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:22.081225   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.081231   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.081236   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.081241   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.081248   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.081254   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.081259   70293 round_trippers.go:580]     Audit-Id: 58177826-596f-47d1-9387-3e5833198f4c
	I1025 18:02:22.081343   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"302","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6076 chars]
	I1025 18:02:22.081629   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.081637   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.081643   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.081649   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.118335   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:02:22.137962   70293 round_trippers.go:574] Response Status: 200 OK in 56 milliseconds
	I1025 18:02:22.137983   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.137994   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.138003   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.138013   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.138024   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.138034   70293 round_trippers.go:580]     Audit-Id: 20de7bf3-1b40-4e26-8864-6a9d34e9d689
	I1025 18:02:22.138045   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.138442   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.138806   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/etcd-multinode-971000
	I1025 18:02:22.138817   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.138826   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.138834   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.142474   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:22.142494   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.142506   70293 round_trippers.go:580]     Audit-Id: a9762499-44f4-4eda-8d1e-3dafd7cf8472
	I1025 18:02:22.142518   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.142529   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.142537   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.142545   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.142552   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.142953   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"302","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6076 chars]
	I1025 18:02:22.143320   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.143335   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.143349   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.143367   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.146811   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:22.146853   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.146882   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.146899   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.146916   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.146930   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.146939   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.146949   70293 round_trippers.go:580]     Audit-Id: 23028079-bb8d-4b54-82b8-11095a681461
	I1025 18:02:22.147083   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.333640   70293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:02:22.534982   70293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:02:22.647641   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/etcd-multinode-971000
	I1025 18:02:22.647678   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.647693   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.647705   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.653351   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:22.653368   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.653375   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.653380   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.653385   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.653389   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.653394   70293 round_trippers.go:580]     Audit-Id: bd37ee79-d0c1-41c9-bfd0-38cd1b6b32cc
	I1025 18:02:22.653398   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.653866   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"353","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I1025 18:02:22.654417   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.654427   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.654435   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.654440   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.684546   70293 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1025 18:02:22.684565   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.684573   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.684584   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.684590   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.684598   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.684604   70293 round_trippers.go:580]     Audit-Id: 187bf39d-c6e1-43e2-9460-aaf18d1d3cb5
	I1025 18:02:22.684611   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.684729   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.684994   70293 pod_ready.go:92] pod "etcd-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:22.685005   70293 pod_ready.go:81] duration metric: took 607.45433ms waiting for pod "etcd-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.685014   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.685060   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-971000
	I1025 18:02:22.685066   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.685074   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.685081   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.736328   70293 round_trippers.go:574] Response Status: 200 OK in 51 milliseconds
	I1025 18:02:22.736351   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.736361   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.736379   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.736423   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.736438   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.736467   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.736486   70293 round_trippers.go:580]     Audit-Id: 9b20a9d3-1ec2-4946-9388-77ea201ec014
	I1025 18:02:22.737511   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-971000","namespace":"kube-system","uid":"b4400411-c3b7-408c-b79f-a2e005efbef3","resourceVersion":"378","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3673709ea844b9ea542719bd93b9f9af","kubernetes.io/config.mirror":"3673709ea844b9ea542719bd93b9f9af","kubernetes.io/config.seen":"2023-10-26T01:02:09.640588239Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8238 chars]
	I1025 18:02:22.738043   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.738057   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.738069   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.738080   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.744311   70293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1025 18:02:22.744331   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.744350   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.744364   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.744369   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.744375   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.744384   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.744393   70293 round_trippers.go:580]     Audit-Id: 485be69f-3a99-4e9d-9ce4-772aec09365f
	I1025 18:02:22.744476   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.744808   70293 pod_ready.go:92] pod "kube-apiserver-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:22.744823   70293 pod_ready.go:81] duration metric: took 59.799765ms waiting for pod "kube-apiserver-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.744836   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.744890   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-971000
	I1025 18:02:22.744900   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.744909   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.744917   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.748766   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:22.748791   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.748803   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.748815   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.748825   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.748836   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.748844   70293 round_trippers.go:580]     Audit-Id: cb4d8def-ff6f-425a-b955-9c8331b59044
	I1025 18:02:22.748858   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.749045   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-971000","namespace":"kube-system","uid":"6347ae2f-f5d5-4533-8b15-4cb194fd7c75","resourceVersion":"301","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.mirror":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8075 chars]
	I1025 18:02:22.749502   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.749515   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.749525   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.749534   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.833561   70293 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I1025 18:02:22.833594   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.833609   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.833638   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.833658   70293 round_trippers.go:580]     Audit-Id: d9333d57-07a9-40ac-b5c1-0622417a631e
	I1025 18:02:22.833672   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.833690   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.833705   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.833864   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.834466   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-971000
	I1025 18:02:22.834484   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.834500   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.834520   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.840183   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:22.840202   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.840212   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.840220   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.840228   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.840248   70293 round_trippers.go:580]     Audit-Id: 6a5701f7-2719-43be-909b-cef485a2fdd7
	I1025 18:02:22.840260   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.840268   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.840450   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-971000","namespace":"kube-system","uid":"6347ae2f-f5d5-4533-8b15-4cb194fd7c75","resourceVersion":"301","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.mirror":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8075 chars]
	I1025 18:02:22.858949   70293 command_runner.go:130] > configmap/coredns replaced
	I1025 18:02:22.860611   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.860626   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.860648   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.860682   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.936768   70293 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I1025 18:02:22.936786   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.936795   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.936805   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.936815   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.936829   70293 round_trippers.go:580]     Audit-Id: b46bfb8b-524a-4fde-a75a-0af1ee668f77
	I1025 18:02:22.936838   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.936845   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.937030   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.940009   70293 start.go:926] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1025 18:02:23.437620   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-971000
	I1025 18:02:23.437640   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.437677   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.437689   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.441964   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:23.441981   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.441988   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.441999   70293 round_trippers.go:580]     Audit-Id: 9b9f17c4-4214-4775-9060-83de68c33eba
	I1025 18:02:23.442010   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.442019   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.442029   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.442053   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.442558   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-971000","namespace":"kube-system","uid":"6347ae2f-f5d5-4533-8b15-4cb194fd7c75","resourceVersion":"392","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.mirror":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7813 chars]
	I1025 18:02:23.443113   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:23.443128   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.443138   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.443147   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.447195   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:23.447215   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.447231   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.447242   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.447250   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.447257   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.447275   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.447283   70293 round_trippers.go:580]     Audit-Id: d117b489-82a7-4f16-a14f-26586d5b09b5
	I1025 18:02:23.447393   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:23.447766   70293 pod_ready.go:92] pod "kube-controller-manager-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:23.447777   70293 pod_ready.go:81] duration metric: took 702.913178ms waiting for pod "kube-controller-manager-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:23.447789   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:23.460911   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-971000
	I1025 18:02:23.460925   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.460934   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.460940   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.464911   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:23.464928   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.464945   70293 round_trippers.go:580]     Audit-Id: 245710a3-e747-4801-90b6-50eda51b536d
	I1025 18:02:23.464958   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.464965   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.464970   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.464974   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.464979   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.465095   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-971000","namespace":"kube-system","uid":"411ae656-7e8b-4e4e-892e-9873855be79f","resourceVersion":"304","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"666bb44a088f2de4036212af9c22245b","kubernetes.io/config.mirror":"666bb44a088f2de4036212af9c22245b","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I1025 18:02:23.532582   70293 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1025 18:02:23.538964   70293 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1025 18:02:23.551188   70293 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1025 18:02:23.561204   70293 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1025 18:02:23.638180   70293 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1025 18:02:23.652958   70293 command_runner.go:130] > pod/storage-provisioner created
	I1025 18:02:23.657512   70293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.32380028s)
	I1025 18:02:23.657549   70293 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1025 18:02:23.657634   70293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122577117s)
	I1025 18:02:23.657761   70293 round_trippers.go:463] GET https://127.0.0.1:57083/apis/storage.k8s.io/v1/storageclasses
	I1025 18:02:23.657822   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.657838   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.657849   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.660662   70293 request.go:629] Waited for 195.192974ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:23.660712   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:23.660723   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.660734   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.660746   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.661354   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:23.661385   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.661401   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.661416   70293 round_trippers.go:580]     Content-Length: 1273
	I1025 18:02:23.661426   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.661433   70293 round_trippers.go:580]     Audit-Id: d4b92fc9-a46d-4b87-87a9-68969d6d0dd1
	I1025 18:02:23.661439   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.661444   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.661460   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.661893   70293 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"standard","uid":"f6c92594-2313-4046-87c3-7ae92ca50b39","resourceVersion":"394","creationTimestamp":"2023-10-26T01:02:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T01:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1025 18:02:23.662433   70293 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f6c92594-2313-4046-87c3-7ae92ca50b39","resourceVersion":"394","creationTimestamp":"2023-10-26T01:02:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T01:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 18:02:23.662488   70293 round_trippers.go:463] PUT https://127.0.0.1:57083/apis/storage.k8s.io/v1/storageclasses/standard
	I1025 18:02:23.662502   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.662514   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.662524   70293 round_trippers.go:473]     Content-Type: application/json
	I1025 18:02:23.662530   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.735715   70293 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I1025 18:02:23.735732   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.735738   70293 round_trippers.go:580]     Audit-Id: 441c0fd1-3d83-4767-b201-fc1c07681b7d
	I1025 18:02:23.735745   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.735752   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.735759   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.735766   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.735774   70293 round_trippers.go:580]     Content-Length: 1220
	I1025 18:02:23.735780   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.735844   70293 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f6c92594-2313-4046-87c3-7ae92ca50b39","resourceVersion":"394","creationTimestamp":"2023-10-26T01:02:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T01:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 18:02:23.736003   70293 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I1025 18:02:23.736018   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.736031   70293 round_trippers.go:580]     Audit-Id: 2e82dfc9-9eeb-47c6-8399-acb08dda3ca4
	I1025 18:02:23.736044   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.736052   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.736061   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.736071   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.798419   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.798393   70293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 18:02:23.819389   70293 addons.go:502] enable addons completed in 1.996061365s: enabled=[storage-provisioner default-storageclass]
	I1025 18:02:23.798501   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:23.819798   70293 pod_ready.go:92] pod "kube-scheduler-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:23.819816   70293 pod_ready.go:81] duration metric: took 372.001138ms waiting for pod "kube-scheduler-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:23.819825   70293 pod_ready.go:38] duration metric: took 1.750736333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:02:23.819844   70293 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:02:23.819926   70293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:02:23.846724   70293 command_runner.go:130] > 2278
	I1025 18:02:23.847756   70293 api_server.go:72] duration metric: took 1.995007964s to wait for apiserver process to appear ...
	I1025 18:02:23.847774   70293 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:02:23.847798   70293 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57083/healthz ...
	I1025 18:02:23.854010   70293 api_server.go:279] https://127.0.0.1:57083/healthz returned 200:
	ok
	I1025 18:02:23.854060   70293 round_trippers.go:463] GET https://127.0.0.1:57083/version
	I1025 18:02:23.854065   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.854074   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.854081   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.855922   70293 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 18:02:23.855933   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.855939   70293 round_trippers.go:580]     Content-Length: 264
	I1025 18:02:23.855944   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.855949   70293 round_trippers.go:580]     Audit-Id: 3dc32f43-800a-42aa-bc98-3657c550e5af
	I1025 18:02:23.855954   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.855963   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.855968   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.855972   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.855984   70293 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 18:02:23.856032   70293 api_server.go:141] control plane version: v1.28.3
	I1025 18:02:23.856040   70293 api_server.go:131] duration metric: took 8.259025ms to wait for apiserver health ...
	I1025 18:02:23.856045   70293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:02:23.860697   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:23.860708   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.860715   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.860720   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.866071   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:23.866091   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.866101   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.866110   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.866118   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.866126   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.866134   70293 round_trippers.go:580]     Audit-Id: 8a389f18-f2ce-4ee5-b960-38ea89021abe
	I1025 18:02:23.866142   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.867414   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-5dd5756b68-cvn82","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b00548f2-a206-488a-9e2b-45f2e1066597","resourceVersion":"387","creationTimestamp":"2023-10-26T01:02:22Z","deletionTimestamp":"2023-10-26T01:02:52Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe
4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 61875 chars]
	I1025 18:02:23.870039   70293 system_pods.go:59] 9 kube-system pods found
	I1025 18:02:23.870069   70293 system_pods.go:61] "coredns-5dd5756b68-cvn82" [b00548f2-a206-488a-9e2b-45f2e1066597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:23.870079   70293 system_pods.go:61] "coredns-5dd5756b68-vm8jw" [8747ca8b-8044-46a8-a5bd-700e0fb6ceb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:23.870084   70293 system_pods.go:61] "etcd-multinode-971000" [686f24fe-a02b-4a6b-8790-b0d2628424c1] Running
	I1025 18:02:23.870089   70293 system_pods.go:61] "kindnet-5txks" [5b661079-5482-4abd-8420-09db800cc9b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 18:02:23.870094   70293 system_pods.go:61] "kube-apiserver-multinode-971000" [b4400411-c3b7-408c-b79f-a2e005efbef3] Running
	I1025 18:02:23.870098   70293 system_pods.go:61] "kube-controller-manager-multinode-971000" [6347ae2f-f5d5-4533-8b15-4cb194fd7c75] Running
	I1025 18:02:23.870103   70293 system_pods.go:61] "kube-proxy-2dzxx" [449549c6-a5cd-4468-b565-55811bb44448] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 18:02:23.870107   70293 system_pods.go:61] "kube-scheduler-multinode-971000" [411ae656-7e8b-4e4e-892e-9873855be79f] Running
	I1025 18:02:23.870112   70293 system_pods.go:61] "storage-provisioner" [8a6d679a-a32e-4707-ad40-063155cf0cde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:02:23.870117   70293 system_pods.go:74] duration metric: took 14.067401ms to wait for pod list to return data ...
	I1025 18:02:23.870124   70293 default_sa.go:34] waiting for default service account to be created ...
	I1025 18:02:24.060795   70293 request.go:629] Waited for 190.605602ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/default/serviceaccounts
	I1025 18:02:24.060889   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/default/serviceaccounts
	I1025 18:02:24.060942   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.060950   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.060956   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.064908   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:24.064930   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.064942   70293 round_trippers.go:580]     Content-Length: 261
	I1025 18:02:24.064953   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.064961   70293 round_trippers.go:580]     Audit-Id: aa7323a0-427f-4da0-acdf-55724076bd00
	I1025 18:02:24.064970   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.064980   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.064991   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.065005   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.065041   70293 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0c2b808-2d7c-4263-802e-9812df34c54c","resourceVersion":"328","creationTimestamp":"2023-10-26T01:02:21Z"}}]}
	I1025 18:02:24.065232   70293 default_sa.go:45] found service account: "default"
	I1025 18:02:24.065248   70293 default_sa.go:55] duration metric: took 195.11068ms for default service account to be created ...
	I1025 18:02:24.065260   70293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 18:02:24.260625   70293 request.go:629] Waited for 195.309388ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:24.260656   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:24.260662   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.260668   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.260674   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.264671   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:24.264683   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.264689   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.264694   70293 round_trippers.go:580]     Audit-Id: b75fd28d-f95f-4dc6-a3cd-a23387c8cad6
	I1025 18:02:24.264699   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.264704   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.264708   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.264713   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.265928   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-cvn82","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b00548f2-a206-488a-9e2b-45f2e1066597","resourceVersion":"387","creationTimestamp":"2023-10-26T01:02:22Z","deletionTimestamp":"2023-10-26T01:02:52Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe
4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 61875 chars]
	I1025 18:02:24.267372   70293 system_pods.go:86] 9 kube-system pods found
	I1025 18:02:24.267386   70293 system_pods.go:89] "coredns-5dd5756b68-cvn82" [b00548f2-a206-488a-9e2b-45f2e1066597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.267392   70293 system_pods.go:89] "coredns-5dd5756b68-vm8jw" [8747ca8b-8044-46a8-a5bd-700e0fb6ceb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.267398   70293 system_pods.go:89] "etcd-multinode-971000" [686f24fe-a02b-4a6b-8790-b0d2628424c1] Running
	I1025 18:02:24.267403   70293 system_pods.go:89] "kindnet-5txks" [5b661079-5482-4abd-8420-09db800cc9b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 18:02:24.267407   70293 system_pods.go:89] "kube-apiserver-multinode-971000" [b4400411-c3b7-408c-b79f-a2e005efbef3] Running
	I1025 18:02:24.267428   70293 system_pods.go:89] "kube-controller-manager-multinode-971000" [6347ae2f-f5d5-4533-8b15-4cb194fd7c75] Running
	I1025 18:02:24.267440   70293 system_pods.go:89] "kube-proxy-2dzxx" [449549c6-a5cd-4468-b565-55811bb44448] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 18:02:24.267446   70293 system_pods.go:89] "kube-scheduler-multinode-971000" [411ae656-7e8b-4e4e-892e-9873855be79f] Running
	I1025 18:02:24.267451   70293 system_pods.go:89] "storage-provisioner" [8a6d679a-a32e-4707-ad40-063155cf0cde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:02:24.267469   70293 retry.go:31] will retry after 286.432033ms: missing components: kube-dns, kube-proxy
	I1025 18:02:24.554155   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:24.554166   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.554173   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.554178   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.557769   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:24.557791   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.557810   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.557822   70293 round_trippers.go:580]     Audit-Id: 5d07e617-d427-432f-bf20-ef648cd3219f
	I1025 18:02:24.557831   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.557836   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.557842   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.557846   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.558307   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-cvn82","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b00548f2-a206-488a-9e2b-45f2e1066597","resourceVersion":"387","creationTimestamp":"2023-10-26T01:02:22Z","deletionTimestamp":"2023-10-26T01:02:52Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe
4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 61875 chars]
	I1025 18:02:24.559752   70293 system_pods.go:86] 9 kube-system pods found
	I1025 18:02:24.559765   70293 system_pods.go:89] "coredns-5dd5756b68-cvn82" [b00548f2-a206-488a-9e2b-45f2e1066597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.559771   70293 system_pods.go:89] "coredns-5dd5756b68-vm8jw" [8747ca8b-8044-46a8-a5bd-700e0fb6ceb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.559775   70293 system_pods.go:89] "etcd-multinode-971000" [686f24fe-a02b-4a6b-8790-b0d2628424c1] Running
	I1025 18:02:24.559799   70293 system_pods.go:89] "kindnet-5txks" [5b661079-5482-4abd-8420-09db800cc9b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 18:02:24.559807   70293 system_pods.go:89] "kube-apiserver-multinode-971000" [b4400411-c3b7-408c-b79f-a2e005efbef3] Running
	I1025 18:02:24.559811   70293 system_pods.go:89] "kube-controller-manager-multinode-971000" [6347ae2f-f5d5-4533-8b15-4cb194fd7c75] Running
	I1025 18:02:24.559817   70293 system_pods.go:89] "kube-proxy-2dzxx" [449549c6-a5cd-4468-b565-55811bb44448] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 18:02:24.559821   70293 system_pods.go:89] "kube-scheduler-multinode-971000" [411ae656-7e8b-4e4e-892e-9873855be79f] Running
	I1025 18:02:24.559828   70293 system_pods.go:89] "storage-provisioner" [8a6d679a-a32e-4707-ad40-063155cf0cde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:02:24.559838   70293 retry.go:31] will retry after 339.074022ms: missing components: kube-dns, kube-proxy
	I1025 18:02:24.899119   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:24.899144   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.899156   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.899166   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.904290   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:24.904302   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.904307   70293 round_trippers.go:580]     Audit-Id: ed49c1e9-65dc-45a2-8591-39897fc51024
	I1025 18:02:24.904312   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.904316   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.904321   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.904326   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.904333   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.905385   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vm8jw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8","resourceVersion":"419","creationTimestamp":"2023-10-26T01:02:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55197 chars]
	I1025 18:02:24.906624   70293 system_pods.go:86] 8 kube-system pods found
	I1025 18:02:24.906635   70293 system_pods.go:89] "coredns-5dd5756b68-vm8jw" [8747ca8b-8044-46a8-a5bd-700e0fb6ceb8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.906641   70293 system_pods.go:89] "etcd-multinode-971000" [686f24fe-a02b-4a6b-8790-b0d2628424c1] Running
	I1025 18:02:24.906646   70293 system_pods.go:89] "kindnet-5txks" [5b661079-5482-4abd-8420-09db800cc9b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 18:02:24.906651   70293 system_pods.go:89] "kube-apiserver-multinode-971000" [b4400411-c3b7-408c-b79f-a2e005efbef3] Running
	I1025 18:02:24.906656   70293 system_pods.go:89] "kube-controller-manager-multinode-971000" [6347ae2f-f5d5-4533-8b15-4cb194fd7c75] Running
	I1025 18:02:24.906673   70293 system_pods.go:89] "kube-proxy-2dzxx" [449549c6-a5cd-4468-b565-55811bb44448] Running
	I1025 18:02:24.906684   70293 system_pods.go:89] "kube-scheduler-multinode-971000" [411ae656-7e8b-4e4e-892e-9873855be79f] Running
	I1025 18:02:24.906689   70293 system_pods.go:89] "storage-provisioner" [8a6d679a-a32e-4707-ad40-063155cf0cde] Running
	I1025 18:02:24.906700   70293 system_pods.go:126] duration metric: took 841.409472ms to wait for k8s-apps to be running ...
	I1025 18:02:24.906706   70293 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 18:02:24.906757   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:02:24.918193   70293 system_svc.go:56] duration metric: took 11.481929ms WaitForService to wait for kubelet.
	I1025 18:02:24.918206   70293 kubeadm.go:581] duration metric: took 3.065432195s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 18:02:24.918225   70293 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:02:24.918266   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes
	I1025 18:02:24.918271   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.918277   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.918283   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.920776   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:24.920793   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.920799   70293 round_trippers.go:580]     Audit-Id: bc96475b-c729-4d89-b157-a27a441dcac1
	I1025 18:02:24.920806   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.920812   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.920819   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.920826   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.920831   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.920887   70293 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4840 chars]
	I1025 18:02:24.921102   70293 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:02:24.921115   70293 node_conditions.go:123] node cpu capacity is 12
	I1025 18:02:24.921126   70293 node_conditions.go:105] duration metric: took 2.896923ms to run NodePressure ...
	I1025 18:02:24.921133   70293 start.go:228] waiting for startup goroutines ...
	I1025 18:02:24.921138   70293 start.go:233] waiting for cluster config update ...
	I1025 18:02:24.921149   70293 start.go:242] writing updated cluster config ...
	I1025 18:02:24.944703   70293 out.go:177] 
	I1025 18:02:24.981892   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:02:24.981983   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:02:25.004569   70293 out.go:177] * Starting worker node multinode-971000-m02 in cluster multinode-971000
	I1025 18:02:25.048634   70293 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:02:25.069534   70293 out.go:177] * Pulling base image ...
	I1025 18:02:25.111807   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:02:25.111845   70293 cache.go:56] Caching tarball of preloaded images
	I1025 18:02:25.111899   70293 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:02:25.112046   70293 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:02:25.112068   70293 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 18:02:25.112166   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:02:25.165159   70293 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:02:25.165184   70293 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:02:25.165202   70293 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:02:25.165247   70293 start.go:365] acquiring machines lock for multinode-971000-m02: {Name:mk4eee4b27ca9a49e69024591cda98f7d3ec6bc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:02:25.165393   70293 start.go:369] acquired machines lock for "multinode-971000-m02" in 134.771µs
	I1025 18:02:25.165417   70293 start.go:93] Provisioning new machine with config: &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 18:02:25.165492   70293 start.go:125] createHost starting for "m02" (driver="docker")
	I1025 18:02:25.188234   70293 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 18:02:25.188298   70293 start.go:159] libmachine.API.Create for "multinode-971000" (driver="docker")
	I1025 18:02:25.188312   70293 client.go:168] LocalClient.Create starting
	I1025 18:02:25.188396   70293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem
	I1025 18:02:25.188444   70293 main.go:141] libmachine: Decoding PEM data...
	I1025 18:02:25.188457   70293 main.go:141] libmachine: Parsing certificate...
	I1025 18:02:25.188506   70293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem
	I1025 18:02:25.188541   70293 main.go:141] libmachine: Decoding PEM data...
	I1025 18:02:25.188549   70293 main.go:141] libmachine: Parsing certificate...
	I1025 18:02:25.209386   70293 cli_runner.go:164] Run: docker network inspect multinode-971000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 18:02:25.309865   70293 network_create.go:77] Found existing network {name:multinode-971000 subnet:0xc003c69bf0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:65535}
	I1025 18:02:25.309914   70293 kic.go:118] calculated static IP "192.168.58.3" for the "multinode-971000-m02" container
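
	The calculated static IP above follows the cluster network's layout: .1 is the gateway, the control plane holds .2, and the second node gets .3. Below is a minimal Go sketch of that kind of derivation, assuming a simple gateway-plus-index scheme; the helper name is made up and this is not minikube's kic implementation.

	// Illustrative only: derive a static IP for the Nth node on the cluster
	// network, assuming the gateway owns .1 and nodes are numbered from 1.
	// Matches the addresses seen in the log (gateway 192.168.58.1, m02 -> .3).
	package main

	import (
		"fmt"
		"net"
	)

	func staticNodeIP(gateway net.IP, nodeIndex int) net.IP {
		ip := gateway.To4()
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] += byte(nodeIndex) // .1 + 2 = .3 for the second node
		return out
	}

	func main() {
		fmt.Println(staticNodeIP(net.ParseIP("192.168.58.1"), 2)) // 192.168.58.3
	}
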
	I1025 18:02:25.310034   70293 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 18:02:25.365203   70293 cli_runner.go:164] Run: docker volume create multinode-971000-m02 --label name.minikube.sigs.k8s.io=multinode-971000-m02 --label created_by.minikube.sigs.k8s.io=true
	I1025 18:02:25.423961   70293 oci.go:103] Successfully created a docker volume multinode-971000-m02
	I1025 18:02:25.424116   70293 cli_runner.go:164] Run: docker run --rm --name multinode-971000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-971000-m02 --entrypoint /usr/bin/test -v multinode-971000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 18:02:25.960659   70293 oci.go:107] Successfully prepared a docker volume multinode-971000-m02
	I1025 18:02:25.960695   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:02:25.960707   70293 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 18:02:25.960858   70293 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-971000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 18:02:28.851198   70293 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-971000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (2.890182667s)
	I1025 18:02:28.851226   70293 kic.go:200] duration metric: took 2.890428 seconds to extract preloaded images to volume
	I1025 18:02:28.851341   70293 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 18:02:28.966671   70293 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-971000-m02 --name multinode-971000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-971000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-971000-m02 --network multinode-971000 --ip 192.168.58.3 --volume multinode-971000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 18:02:29.292433   70293 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Running}}
	I1025 18:02:29.358427   70293 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Status}}
	I1025 18:02:29.424626   70293 cli_runner.go:164] Run: docker exec multinode-971000-m02 stat /var/lib/dpkg/alternatives/iptables
	I1025 18:02:29.545353   70293 oci.go:144] the created container "multinode-971000-m02" has a running status.
	I1025 18:02:29.545387   70293 kic.go:222] Creating ssh key for kic: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa...
	I1025 18:02:29.917368   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 18:02:29.917418   70293 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 18:02:29.989729   70293 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Status}}
	I1025 18:02:30.055055   70293 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 18:02:30.055085   70293 kic_runner.go:114] Args: [docker exec --privileged multinode-971000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 18:02:30.179108   70293 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Status}}
	I1025 18:02:30.237150   70293 machine.go:88] provisioning docker machine ...
	I1025 18:02:30.237188   70293 ubuntu.go:169] provisioning hostname "multinode-971000-m02"
	I1025 18:02:30.237303   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:30.346663   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:30.347069   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:30.347080   70293 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-971000-m02 && echo "multinode-971000-m02" | sudo tee /etc/hostname
	I1025 18:02:30.483876   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-971000-m02
	
	I1025 18:02:30.483988   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:30.540867   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:30.541238   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:30.541273   70293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-971000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-971000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-971000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:02:30.666280   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:02:30.666337   70293 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:02:30.666349   70293 ubuntu.go:177] setting up certificates
	I1025 18:02:30.666360   70293 provision.go:83] configureAuth start
	I1025 18:02:30.666470   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000-m02
	I1025 18:02:30.725402   70293 provision.go:138] copyHostCerts
	I1025 18:02:30.725448   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:02:30.725504   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:02:30.725510   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:02:30.725652   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:02:30.725864   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:02:30.725893   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:02:30.725898   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:02:30.726014   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:02:30.726191   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:02:30.726228   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:02:30.726234   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:02:30.726350   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:02:30.726536   70293 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.multinode-971000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-971000-m02]
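
	The server certificate generated above is signed by the cluster CA and carries the IP and DNS SANs listed in the log line. A minimal sketch of that step with Go's crypto/x509 follows; the in-memory CA stands in for ca.pem/ca-key.pem and error handling is elided, so this is an illustration, not minikube's provision code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA (the real flow parses ca.pem and ca-key.pem from disk).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-971000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // roughly the 26280h CertExpiration in the config
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "multinode-971000-m02"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
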
	I1025 18:02:31.282455   70293 provision.go:172] copyRemoteCerts
	I1025 18:02:31.282518   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:02:31.282573   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:31.339217   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:31.432391   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 18:02:31.432471   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:02:31.457280   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 18:02:31.457390   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1025 18:02:31.482258   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 18:02:31.482335   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 18:02:31.506755   70293 provision.go:86] duration metric: configureAuth took 840.360687ms
	I1025 18:02:31.506777   70293 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:02:31.506945   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:02:31.507055   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:31.567137   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:31.567444   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:31.567457   70293 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:02:31.692926   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:02:31.692944   70293 ubuntu.go:71] root file system type: overlay
	I1025 18:02:31.693094   70293 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:02:31.693197   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:31.753838   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:31.754239   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:31.754300   70293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:02:31.889139   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:02:31.889244   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:31.947028   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:31.947369   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:31.947386   70293 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:02:32.620831   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:02:31.886155258 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1025 18:02:32.620859   70293 machine.go:91] provisioned docker machine in 2.3836145s
	I1025 18:02:32.620867   70293 client.go:171] LocalClient.Create took 7.432327075s
	I1025 18:02:32.620887   70293 start.go:167] duration metric: libmachine.API.Create for "multinode-971000" took 7.432365189s
	I1025 18:02:32.620892   70293 start.go:300] post-start starting for "multinode-971000-m02" (driver="docker")
	I1025 18:02:32.620899   70293 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:02:32.620967   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:02:32.621055   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:32.681244   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:32.775550   70293 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:02:32.780672   70293 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1025 18:02:32.780682   70293 command_runner.go:130] > NAME="Ubuntu"
	I1025 18:02:32.780687   70293 command_runner.go:130] > VERSION_ID="22.04"
	I1025 18:02:32.780692   70293 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1025 18:02:32.780696   70293 command_runner.go:130] > VERSION_CODENAME=jammy
	I1025 18:02:32.780699   70293 command_runner.go:130] > ID=ubuntu
	I1025 18:02:32.780703   70293 command_runner.go:130] > ID_LIKE=debian
	I1025 18:02:32.780710   70293 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 18:02:32.780715   70293 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 18:02:32.780722   70293 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 18:02:32.780730   70293 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 18:02:32.780735   70293 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1025 18:02:32.780790   70293 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:02:32.780817   70293 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:02:32.780827   70293 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:02:32.780832   70293 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:02:32.780839   70293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:02:32.780945   70293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:02:32.781206   70293 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:02:32.781214   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /etc/ssl/certs/652922.pem
	I1025 18:02:32.781402   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:02:32.792039   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:02:32.817649   70293 start.go:303] post-start completed in 196.74227ms
	I1025 18:02:32.818255   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000-m02
	I1025 18:02:32.877324   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:02:32.877792   70293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:02:32.877859   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:32.938631   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:33.025895   70293 command_runner.go:130] > 7%
	I1025 18:02:33.025989   70293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:02:33.031599   70293 command_runner.go:130] > 91G
	I1025 18:02:33.031937   70293 start.go:128] duration metric: createHost completed in 7.866199714s
	I1025 18:02:33.031959   70293 start.go:83] releasing machines lock for "multinode-971000-m02", held for 7.866319112s
	I1025 18:02:33.032092   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000-m02
	I1025 18:02:33.123810   70293 out.go:177] * Found network options:
	I1025 18:02:33.165532   70293 out.go:177]   - NO_PROXY=192.168.58.2
	W1025 18:02:33.186619   70293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1025 18:02:33.186656   70293 proxy.go:119] fail to check proxy env: Error ip not in block
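
	The two warnings above come from validating the NO_PROXY value against node IPs. As a generic illustration of such a containment check (not minikube's proxy.go; the helper name and error text are made up), a CIDR entry would be parsed and tested like this:

	package main

	import (
		"fmt"
		"net"
	)

	// ipInBlock reports whether ip falls inside the CIDR block; a bare IP or
	// other non-CIDR entry fails to parse and surfaces as an error.
	func ipInBlock(ip, block string) (bool, error) {
		_, cidr, err := net.ParseCIDR(block)
		if err != nil {
			return false, fmt.Errorf("ip not in block: %w", err)
		}
		return cidr.Contains(net.ParseIP(ip)), nil
	}

	func main() {
		fmt.Println(ipInBlock("192.168.58.2", "192.168.58.0/24")) // true <nil>
		fmt.Println(ipInBlock("192.168.58.2", "192.168.58.2"))    // false, parse error
	}
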
	I1025 18:02:33.186752   70293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 18:02:33.186765   70293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:02:33.186812   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:33.186838   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:33.252312   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:33.252778   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:33.445226   70293 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 18:02:33.447073   70293 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1025 18:02:33.447096   70293 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1025 18:02:33.447106   70293 command_runner.go:130] > Device: 10002bh/1048619d	Inode: 1048758     Links: 1
	I1025 18:02:33.447114   70293 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 18:02:33.447133   70293 command_runner.go:130] > Access: 2023-10-26 00:39:30.354217175 +0000
	I1025 18:02:33.447142   70293 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1025 18:02:33.447147   70293 command_runner.go:130] > Change: 2023-10-26 00:39:14.867105012 +0000
	I1025 18:02:33.447152   70293 command_runner.go:130] >  Birth: 2023-10-26 00:39:14.867105012 +0000
	I1025 18:02:33.447228   70293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 18:02:33.475270   70293 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 18:02:33.475367   70293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 18:02:33.504673   70293 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1025 18:02:33.504708   70293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
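
	The find/mv pass above disables any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so the runtime ignores them. A rough Go equivalent of that rename, assuming the same /etc/cni/net.d layout (illustrative only, not minikube's cni package):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pattern)
			for _, f := range matches {
				if filepath.Ext(f) == ".mk_disabled" {
					continue // already disabled
				}
				if err := os.Rename(f, f+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}
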
	I1025 18:02:33.504718   70293 start.go:472] detecting cgroup driver to use...
	I1025 18:02:33.504736   70293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:02:33.504814   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:02:33.524012   70293 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1025 18:02:33.524128   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 18:02:33.536192   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:02:33.548193   70293 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:02:33.548269   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:02:33.560171   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:02:33.572909   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:02:33.585237   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:02:33.597134   70293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:02:33.608204   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:02:33.619829   70293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:02:33.629721   70293 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1025 18:02:33.630500   70293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:02:33.641283   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:02:33.714029   70293 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:02:33.803883   70293 start.go:472] detecting cgroup driver to use...
	I1025 18:02:33.803907   70293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:02:33.803984   70293 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:02:33.818151   70293 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 18:02:33.818202   70293 command_runner.go:130] > [Unit]
	I1025 18:02:33.818216   70293 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 18:02:33.818225   70293 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 18:02:33.818234   70293 command_runner.go:130] > BindsTo=containerd.service
	I1025 18:02:33.818246   70293 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 18:02:33.818256   70293 command_runner.go:130] > Wants=network-online.target
	I1025 18:02:33.818264   70293 command_runner.go:130] > Requires=docker.socket
	I1025 18:02:33.818275   70293 command_runner.go:130] > StartLimitBurst=3
	I1025 18:02:33.818288   70293 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 18:02:33.818308   70293 command_runner.go:130] > [Service]
	I1025 18:02:33.818319   70293 command_runner.go:130] > Type=notify
	I1025 18:02:33.818329   70293 command_runner.go:130] > Restart=on-failure
	I1025 18:02:33.818336   70293 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1025 18:02:33.818351   70293 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 18:02:33.818364   70293 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 18:02:33.818374   70293 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 18:02:33.818383   70293 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 18:02:33.818394   70293 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 18:02:33.818404   70293 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 18:02:33.818415   70293 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 18:02:33.818442   70293 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 18:02:33.818458   70293 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 18:02:33.818466   70293 command_runner.go:130] > ExecStart=
	I1025 18:02:33.818488   70293 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 18:02:33.818503   70293 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 18:02:33.818515   70293 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 18:02:33.818525   70293 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 18:02:33.818541   70293 command_runner.go:130] > LimitNOFILE=infinity
	I1025 18:02:33.818554   70293 command_runner.go:130] > LimitNPROC=infinity
	I1025 18:02:33.818565   70293 command_runner.go:130] > LimitCORE=infinity
	I1025 18:02:33.818578   70293 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 18:02:33.818586   70293 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 18:02:33.818594   70293 command_runner.go:130] > TasksMax=infinity
	I1025 18:02:33.818605   70293 command_runner.go:130] > TimeoutStartSec=0
	I1025 18:02:33.818621   70293 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 18:02:33.818633   70293 command_runner.go:130] > Delegate=yes
	I1025 18:02:33.818647   70293 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 18:02:33.818652   70293 command_runner.go:130] > KillMode=process
	I1025 18:02:33.818658   70293 command_runner.go:130] > [Install]
	I1025 18:02:33.818665   70293 command_runner.go:130] > WantedBy=multi-user.target
	I1025 18:02:33.819864   70293 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:02:33.819981   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:02:33.836746   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:02:33.859185   70293 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 18:02:33.860666   70293 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:02:33.873678   70293 command_runner.go:130] > /usr/bin/cri-dockerd
	I1025 18:02:33.873806   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:02:33.887293   70293 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:02:33.909996   70293 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:02:34.013385   70293 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:02:34.109158   70293 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:02:34.109193   70293 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:02:34.130833   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:02:34.215763   70293 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:02:34.495652   70293 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:02:34.563759   70293 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I1025 18:02:34.563835   70293 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:02:34.634942   70293 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:02:34.703302   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:02:34.770025   70293 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:02:34.802619   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:02:34.870199   70293 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 18:02:34.968969   70293 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:02:34.969131   70293 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:02:34.975888   70293 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 18:02:34.975903   70293 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 18:02:34.975909   70293 command_runner.go:130] > Device: 100033h/1048627d	Inode: 267         Links: 1
	I1025 18:02:34.975916   70293 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 18:02:34.975924   70293 command_runner.go:130] > Access: 2023-10-26 01:02:34.881155431 +0000
	I1025 18:02:34.975930   70293 command_runner.go:130] > Modify: 2023-10-26 01:02:34.881155431 +0000
	I1025 18:02:34.975935   70293 command_runner.go:130] > Change: 2023-10-26 01:02:34.897155432 +0000
	I1025 18:02:34.975939   70293 command_runner.go:130] >  Birth: 2023-10-26 01:02:34.881155431 +0000
	I1025 18:02:34.975951   70293 start.go:540] Will wait 60s for crictl version
	I1025 18:02:34.976009   70293 ssh_runner.go:195] Run: which crictl
	I1025 18:02:34.981666   70293 command_runner.go:130] > /usr/bin/crictl
	I1025 18:02:34.981876   70293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:02:35.033300   70293 command_runner.go:130] > Version:  0.1.0
	I1025 18:02:35.033313   70293 command_runner.go:130] > RuntimeName:  docker
	I1025 18:02:35.033317   70293 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1025 18:02:35.033321   70293 command_runner.go:130] > RuntimeApiVersion:  v1
	I1025 18:02:35.035485   70293 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 18:02:35.035571   70293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:02:35.064851   70293 command_runner.go:130] > 24.0.6
	I1025 18:02:35.066266   70293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:02:35.094067   70293 command_runner.go:130] > 24.0.6
	I1025 18:02:35.115994   70293 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 18:02:35.159918   70293 out.go:177]   - env NO_PROXY=192.168.58.2
	I1025 18:02:35.180769   70293 cli_runner.go:164] Run: docker exec -t multinode-971000-m02 dig +short host.docker.internal
	I1025 18:02:35.315203   70293 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:02:35.315312   70293 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:02:35.321041   70293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:02:35.334465   70293 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000 for IP: 192.168.58.3
	I1025 18:02:35.334483   70293 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:02:35.334680   70293 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:02:35.334743   70293 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:02:35.334753   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 18:02:35.334775   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 18:02:35.334791   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 18:02:35.334812   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 18:02:35.334911   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:02:35.334977   70293 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:02:35.335005   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:02:35.335094   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:02:35.335184   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:02:35.335269   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:02:35.335383   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:02:35.335450   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem -> /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.335483   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.335528   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.335869   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:02:35.362135   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:02:35.388274   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:02:35.414748   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:02:35.440939   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:02:35.466113   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:02:35.492088   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:02:35.518217   70293 ssh_runner.go:195] Run: openssl version
	I1025 18:02:35.524493   70293 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1025 18:02:35.524763   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:02:35.537094   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.542243   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.542269   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.542325   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.550279   70293 command_runner.go:130] > 51391683
	I1025 18:02:35.550373   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:02:35.562541   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:02:35.574057   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.579313   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.579331   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.579396   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.587804   70293 command_runner.go:130] > 3ec20f2e
	I1025 18:02:35.587891   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:02:35.599703   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:02:35.611006   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.615982   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.616014   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.616076   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.624105   70293 command_runner.go:130] > b5213941
	I1025 18:02:35.624361   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:02:35.636806   70293 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:02:35.642063   70293 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:02:35.642087   70293 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:02:35.642206   70293 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:02:35.707300   70293 command_runner.go:130] > cgroupfs
	I1025 18:02:35.708700   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:02:35.708711   70293 cni.go:136] 2 nodes found, recommending kindnet
	I1025 18:02:35.708722   70293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 18:02:35.708737   70293 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-971000 NodeName:multinode-971000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:02:35.708844   70293 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-971000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:02:35.708888   70293 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-971000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:02:35.708955   70293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 18:02:35.719131   70293 command_runner.go:130] > kubeadm
	I1025 18:02:35.719167   70293 command_runner.go:130] > kubectl
	I1025 18:02:35.719177   70293 command_runner.go:130] > kubelet
	I1025 18:02:35.720024   70293 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:02:35.720096   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1025 18:02:35.731061   70293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1025 18:02:35.751271   70293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:02:35.772282   70293 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:02:35.777942   70293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
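	The two files copied above (the kubelet systemd drop-in and the kubelet.service unit) plus the /etc/hosts rewrite are what prepare the worker for kubeadm join. One way to confirm they landed on the node, sketched here only and not part of the test flow, again using the profile and node names from this log:

	    minikube -p multinode-971000 ssh -n multinode-971000-m02 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    minikube -p multinode-971000 ssh -n multinode-971000-m02 -- grep control-plane.minikube.internal /etc/hosts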
	I1025 18:02:35.791282   70293 host.go:66] Checking if "multinode-971000" exists ...
	I1025 18:02:35.791464   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:02:35.791488   70293 start.go:304] JoinCluster: &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:02:35.791560   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1025 18:02:35.791622   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:35.850331   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:02:36.003294   70293 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token hw8z7i.u3ykeij10qe0tbqv --discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d 
	I1025 18:02:36.003330   70293 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 18:02:36.003349   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hw8z7i.u3ykeij10qe0tbqv --discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-971000-m02"
	I1025 18:02:36.043621   70293 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 18:02:36.192809   70293 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 18:02:36.192838   70293 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1025 18:02:36.226540   70293 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:02:36.226563   70293 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:02:36.226569   70293 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1025 18:02:36.308470   70293 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1025 18:02:37.823920   70293 command_runner.go:130] > This node has joined the cluster:
	I1025 18:02:37.823934   70293 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1025 18:02:37.823940   70293 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1025 18:02:37.823945   70293 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1025 18:02:37.826512   70293 command_runner.go:130] ! W1026 01:02:36.042439    1503 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 18:02:37.826524   70293 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 18:02:37.826542   70293 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:02:37.826552   70293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hw8z7i.u3ykeij10qe0tbqv --discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-971000-m02": (1.82314044s)
	I1025 18:02:37.826569   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1025 18:02:37.960538   70293 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1025 18:02:37.960559   70293 start.go:306] JoinCluster complete in 2.1690061s
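	The join above is driven by a one-time token printed on the control plane and replayed on the worker. Reproducing it by hand would look roughly like the sketch below; the token and hash are placeholders (the literal values in this log are already consumed), and passing the CRI socket with an explicit unix:// scheme avoids the deprecation warning kubeadm printed at 01:02:36 above.

	    # on the control-plane node
	    sudo kubeadm token create --print-join-command --ttl=0
	    # on the worker, substituting the printed token and CA hash
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	      --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock \
	      --node-name=multinode-971000-m02
	    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet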
	I1025 18:02:37.960570   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:02:37.960578   70293 cni.go:136] 2 nodes found, recommending kindnet
	I1025 18:02:37.960664   70293 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 18:02:37.966120   70293 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1025 18:02:37.966137   70293 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1025 18:02:37.966145   70293 command_runner.go:130] > Device: a4h/164d	Inode: 1049408     Links: 1
	I1025 18:02:37.966157   70293 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 18:02:37.966179   70293 command_runner.go:130] > Access: 2023-10-26 00:39:30.623217190 +0000
	I1025 18:02:37.966188   70293 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1025 18:02:37.966196   70293 command_runner.go:130] > Change: 2023-10-26 00:39:15.549105052 +0000
	I1025 18:02:37.966208   70293 command_runner.go:130] >  Birth: 2023-10-26 00:39:15.509105049 +0000
	I1025 18:02:37.966250   70293 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 18:02:37.966256   70293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 18:02:37.986190   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 18:02:38.228114   70293 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1025 18:02:38.233031   70293 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1025 18:02:38.235932   70293 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1025 18:02:38.247397   70293 command_runner.go:130] > daemonset.apps/kindnet configured
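	With a second node present, minikube selects kindnet as the CNI and re-applies its manifest; the "unchanged"/"configured" lines above show the apply was idempotent. A generic follow-up check (not part of the test flow) could be:

	    kubectl -n kube-system get daemonset kindnet -o wide
	    kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m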
	I1025 18:02:38.252480   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:38.252743   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:38.253193   70293 round_trippers.go:463] GET https://127.0.0.1:57083/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 18:02:38.253206   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.253216   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.253222   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.256153   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:38.256170   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.256180   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.256187   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.256198   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.256206   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.256213   70293 round_trippers.go:580]     Content-Length: 291
	I1025 18:02:38.256221   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.256232   70293 round_trippers.go:580]     Audit-Id: 5db82f93-1d7c-450b-8ca8-257f41c6259e
	I1025 18:02:38.256261   70293 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"485","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:38.256358   70293 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-971000" context rescaled to 1 replicas
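	The GET above reads the Scale subresource of the coredns deployment before minikube pins it to a single replica. An equivalent check and adjustment from the command line, using the namespace and deployment name shown in this log, might be:

	    kubectl -n kube-system get deployment coredns
	    kubectl -n kube-system scale deployment coredns --replicas=1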
	I1025 18:02:38.256381   70293 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 18:02:38.317325   70293 out.go:177] * Verifying Kubernetes components...
	I1025 18:02:38.338555   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:02:38.351792   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:38.415325   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:38.415574   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:38.415849   70293 node_ready.go:35] waiting up to 6m0s for node "multinode-971000-m02" to be "Ready" ...
	I1025 18:02:38.415907   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000-m02
	I1025 18:02:38.415913   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.415920   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.415932   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.420281   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:38.420299   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.420305   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.420310   70293 round_trippers.go:580]     Audit-Id: a3f928ea-b736-4382-989b-2d9c23cf87ab
	I1025 18:02:38.420315   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.420320   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.420325   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.420329   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.420423   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000-m02","uid":"7897eeaa-223d-4777-9f20-9231836b81c9","resourceVersion":"486","creationTimestamp":"2023-10-26T01:02:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4016 chars]
	I1025 18:02:38.420639   70293 node_ready.go:49] node "multinode-971000-m02" has status "Ready":"True"
	I1025 18:02:38.420648   70293 node_ready.go:38] duration metric: took 4.788532ms waiting for node "multinode-971000-m02" to be "Ready" ...
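	The readiness decision above comes straight from the node object's Ready condition in the response body. A rough equivalent with plain kubectl, assuming the same kubeconfig the test loaded, would be:

	    kubectl --kubeconfig /Users/jenkins/minikube-integration/17488-64832/kubeconfig \
	      get node multinode-971000-m02 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'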
	I1025 18:02:38.420659   70293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:02:38.420720   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:38.420727   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.420733   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.420739   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.425231   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:38.425255   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.425284   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.425292   70293 round_trippers.go:580]     Audit-Id: 5935a45b-df76-4d1f-a2f7-1878083de854
	I1025 18:02:38.425298   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.425303   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.425325   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.425331   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.426423   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vm8jw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8","resourceVersion":"481","creationTimestamp":"2023-10-26T01:02:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68697 chars]
	I1025 18:02:38.428369   70293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vm8jw" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.428421   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vm8jw
	I1025 18:02:38.428426   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.428434   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.428441   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.431720   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.431733   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.431739   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.431744   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.431749   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.431754   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.431761   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.431766   70293 round_trippers.go:580]     Audit-Id: 4a971ffe-20b5-4360-87ae-3c3dcaa3d8bc
	I1025 18:02:38.431837   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vm8jw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8","resourceVersion":"481","creationTimestamp":"2023-10-26T01:02:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1025 18:02:38.432133   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.432148   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.432162   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.432178   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.435356   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.435373   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.435380   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.435386   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.435391   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.435400   70293 round_trippers.go:580]     Audit-Id: c8291aa1-6017-403e-9904-4c8632fd5108
	I1025 18:02:38.435406   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.435412   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.435629   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.435867   70293 pod_ready.go:92] pod "coredns-5dd5756b68-vm8jw" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.435877   70293 pod_ready.go:81] duration metric: took 7.494994ms waiting for pod "coredns-5dd5756b68-vm8jw" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.435889   70293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.435951   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/etcd-multinode-971000
	I1025 18:02:38.435964   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.435973   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.435982   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.439288   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.439302   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.439309   70293 round_trippers.go:580]     Audit-Id: 159df4b7-72f6-46b2-8a66-82f933870368
	I1025 18:02:38.439318   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.439326   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.439331   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.439336   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.439342   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.439431   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"353","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I1025 18:02:38.439722   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.439730   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.439739   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.439747   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.443024   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.443036   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.443043   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.443049   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.443055   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.443060   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.443066   70293 round_trippers.go:580]     Audit-Id: 51f86fbc-549e-460f-bee3-a69df57041ec
	I1025 18:02:38.443072   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.443164   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.443437   70293 pod_ready.go:92] pod "etcd-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.443447   70293 pod_ready.go:81] duration metric: took 7.550706ms waiting for pod "etcd-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.443457   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.443500   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-971000
	I1025 18:02:38.443505   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.443511   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.443517   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.447120   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.447134   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.447142   70293 round_trippers.go:580]     Audit-Id: 3a646bc3-2ecb-4b9f-8b5b-dc28fc310542
	I1025 18:02:38.447151   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.447163   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.447176   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.447185   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.447193   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.447397   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-971000","namespace":"kube-system","uid":"b4400411-c3b7-408c-b79f-a2e005efbef3","resourceVersion":"378","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3673709ea844b9ea542719bd93b9f9af","kubernetes.io/config.mirror":"3673709ea844b9ea542719bd93b9f9af","kubernetes.io/config.seen":"2023-10-26T01:02:09.640588239Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8238 chars]
	I1025 18:02:38.447710   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.447718   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.447726   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.447731   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.450636   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:38.450649   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.450655   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.450660   70293 round_trippers.go:580]     Audit-Id: 9098fdf8-88ca-46e3-8cb5-f48560dcf82d
	I1025 18:02:38.450665   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.450673   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.450679   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.450683   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.450748   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.450937   70293 pod_ready.go:92] pod "kube-apiserver-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.450946   70293 pod_ready.go:81] duration metric: took 7.481952ms waiting for pod "kube-apiserver-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.450954   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.450994   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-971000
	I1025 18:02:38.450999   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.451006   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.451011   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.453918   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:38.453929   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.453934   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.453939   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.453944   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.453949   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.453953   70293 round_trippers.go:580]     Audit-Id: 3b8ab7b4-be87-4eba-9f63-4d4149fdc7a1
	I1025 18:02:38.453959   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.454177   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-971000","namespace":"kube-system","uid":"6347ae2f-f5d5-4533-8b15-4cb194fd7c75","resourceVersion":"392","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.mirror":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7813 chars]
	I1025 18:02:38.454495   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.454507   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.454513   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.454519   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.457083   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:38.457095   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.457101   70293 round_trippers.go:580]     Audit-Id: 40cb8917-25dd-4174-b5a0-29b49fe2afdf
	I1025 18:02:38.457107   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.457112   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.457118   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.457122   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.457127   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.457184   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.457431   70293 pod_ready.go:92] pod "kube-controller-manager-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.457441   70293 pod_ready.go:81] duration metric: took 6.480707ms waiting for pod "kube-controller-manager-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.457454   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2dzxx" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.617314   70293 request.go:629] Waited for 159.763178ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-proxy-2dzxx
	I1025 18:02:38.617473   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-proxy-2dzxx
	I1025 18:02:38.617484   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.617502   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.617513   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.622186   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:38.622198   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.622204   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.622208   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.622213   70293 round_trippers.go:580]     Audit-Id: 45be6e02-588b-46e2-9c20-beb92129cb1e
	I1025 18:02:38.622219   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.622225   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.622229   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.622293   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2dzxx","generateName":"kube-proxy-","namespace":"kube-system","uid":"449549c6-a5cd-4468-b565-55811bb44448","resourceVersion":"421","creationTimestamp":"2023-10-26T01:02:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9df5e1d-1006-43e9-a993-70229a126a7e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9df5e1d-1006-43e9-a993-70229a126a7e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5528 chars]
	I1025 18:02:38.815996   70293 request.go:629] Waited for 193.435151ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.816032   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.816038   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.816047   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.816086   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.819185   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.819199   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.819207   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.819214   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.819219   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.819224   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.819228   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.819232   70293 round_trippers.go:580]     Audit-Id: f363f67c-bf71-43bd-b016-50d60173450c
	I1025 18:02:38.819301   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.819509   70293 pod_ready.go:92] pod "kube-proxy-2dzxx" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.819519   70293 pod_ready.go:81] duration metric: took 362.049067ms waiting for pod "kube-proxy-2dzxx" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.819525   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qbx49" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:39.016692   70293 request.go:629] Waited for 197.099423ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-proxy-qbx49
	I1025 18:02:39.016791   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-proxy-qbx49
	I1025 18:02:39.016802   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.016813   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.016824   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.020580   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:39.020591   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.020596   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.020601   70293 round_trippers.go:580]     Audit-Id: 42c02b9a-9700-4c6a-9a4b-dc4ab5a93d5b
	I1025 18:02:39.020606   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.020611   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.020619   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.020624   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.020682   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qbx49","generateName":"kube-proxy-","namespace":"kube-system","uid":"0870cc92-6113-421d-9cd5-08a2ca23e892","resourceVersion":"494","creationTimestamp":"2023-10-26T01:02:36Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9df5e1d-1006-43e9-a993-70229a126a7e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9df5e1d-1006-43e9-a993-70229a126a7e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1025 18:02:39.217468   70293 request.go:629] Waited for 196.458334ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes/multinode-971000-m02
	I1025 18:02:39.217516   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000-m02
	I1025 18:02:39.217525   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.217538   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.217550   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.222023   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:39.222041   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.222047   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.222052   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.222058   70293 round_trippers.go:580]     Audit-Id: f8e20068-77d6-4fe7-87f9-197461b739e5
	I1025 18:02:39.222062   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.222067   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.222074   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.222129   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000-m02","uid":"7897eeaa-223d-4777-9f20-9231836b81c9","resourceVersion":"486","creationTimestamp":"2023-10-26T01:02:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4016 chars]
	I1025 18:02:39.222306   70293 pod_ready.go:92] pod "kube-proxy-qbx49" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:39.222314   70293 pod_ready.go:81] duration metric: took 402.770963ms waiting for pod "kube-proxy-qbx49" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:39.222319   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:39.418003   70293 request.go:629] Waited for 195.635696ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-971000
	I1025 18:02:39.418116   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-971000
	I1025 18:02:39.418127   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.418138   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.418161   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.422724   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:39.422735   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.422744   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.422748   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.422754   70293 round_trippers.go:580]     Audit-Id: 14bf5615-6cb4-462b-af3e-42204698d4f7
	I1025 18:02:39.422758   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.422763   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.422768   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.422855   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-971000","namespace":"kube-system","uid":"411ae656-7e8b-4e4e-892e-9873855be79f","resourceVersion":"304","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"666bb44a088f2de4036212af9c22245b","kubernetes.io/config.mirror":"666bb44a088f2de4036212af9c22245b","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I1025 18:02:39.617159   70293 request.go:629] Waited for 194.033629ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:39.617206   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:39.617214   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.617225   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.617245   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.621485   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:39.621496   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.621502   70293 round_trippers.go:580]     Audit-Id: 257e5928-735b-42a8-9b44-9c8eab2e5e7e
	I1025 18:02:39.621507   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.621512   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.621517   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.621521   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.621527   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.621578   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:39.621770   70293 pod_ready.go:92] pod "kube-scheduler-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:39.621778   70293 pod_ready.go:81] duration metric: took 399.442528ms waiting for pod "kube-scheduler-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:39.621786   70293 pod_ready.go:38] duration metric: took 1.201078165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:02:39.621798   70293 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 18:02:39.621853   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:02:39.633961   70293 system_svc.go:56] duration metric: took 12.158408ms WaitForService to wait for kubelet.
	I1025 18:02:39.633976   70293 kubeadm.go:581] duration metric: took 1.377520239s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 18:02:39.633991   70293 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:02:39.816808   70293 request.go:629] Waited for 182.752376ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes
	I1025 18:02:39.816992   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes
	I1025 18:02:39.817005   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.817046   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.817074   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.821319   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:39.821330   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.821336   70293 round_trippers.go:580]     Audit-Id: 810c9e8f-2e12-4eb2-9102-a5a5617acf1e
	I1025 18:02:39.821340   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.821345   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.821354   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.821360   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.821364   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.821451   70293 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"495"},"items":[{"metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9848 chars]
	I1025 18:02:39.821748   70293 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:02:39.821756   70293 node_conditions.go:123] node cpu capacity is 12
	I1025 18:02:39.821762   70293 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:02:39.821765   70293 node_conditions.go:123] node cpu capacity is 12
	I1025 18:02:39.821768   70293 node_conditions.go:105] duration metric: took 187.768133ms to run NodePressure ...
	I1025 18:02:39.821776   70293 start.go:228] waiting for startup goroutines ...
	I1025 18:02:39.821798   70293 start.go:242] writing updated cluster config ...
	I1025 18:02:39.822113   70293 ssh_runner.go:195] Run: rm -f paused
	I1025 18:02:39.867123   70293 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 18:02:39.909318   70293 out.go:177] * Done! kubectl is now configured to use "multinode-971000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Start docker client with request timeout 0s"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Loaded network plugin cni"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Docker Info: &{ID:f3d51850-6481-4bd0-a266-f12fa811602f Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:35 SystemTime:2023-10-26T01:01:58.954420413Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:6.4.16-linuxkit OperatingSystem:Ubu
ntu 22.04.3 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0006a61c0 NCPU:12 MemTotal:6227828736 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:multinode-971000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: De
faultAddressPools:[] Warnings:[]}"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 26 01:01:58 multinode-971000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Oct 26 01:02:04 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7ac42f2dcdad432eeb1e3756741a67375cb1d90e4816307794c7394b4e227576/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:04 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9a8f2c969a514c208eb4209b96996da1f8b6058ab259c3f9567cd53b38a9bc9/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:04 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eeb64dfb3ded12aafcdf6082b1851fa87b041561ab92188a28179968aacbc81e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:04 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/279847d786425922090b228d6893db6ab1baeef70f9ca157677c80a5c8f13b48/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:23 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/53b2db21418a13bed7b201ee288a10c8cedf3987ab476aa1f2b977752337a6c5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:23 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5820da8798d68fc61909468c96e71830a39cc52b26a3e09f5d3dbe3f059f9ece/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:23 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/088e2f49585df3edcd504b3af3f4f591dfaca61ed0fcdce6b37853c9d6eb7c58/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:23 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:23Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-vm8jw_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Oct 26 01:02:24 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c78173724764d9514a55d77e0b4dabc16c5a8d6bb7b5f03ddb8c09abc4613ba6/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:24 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-vm8jw_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Oct 26 01:02:27 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:27Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20230809-80a64d96: Status: Downloaded newer image for kindest/kindnetd:v20230809-80a64d96"
	Oct 26 01:02:30 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 26 01:02:36 multinode-971000 dockerd[1064]: time="2023-10-26T01:02:36.884209606Z" level=info msg="ignoring event" container=130896ac2d7b00a2517546fb70a32433c2451bd66c4491817c492c3542273ff8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:02:36 multinode-971000 dockerd[1064]: time="2023-10-26T01:02:36.964615012Z" level=info msg="ignoring event" container=5820da8798d68fc61909468c96e71830a39cc52b26a3e09f5d3dbe3f059f9ece module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:02:37 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1456a363c70fddfeaf26cd15a109fe25dd4a1bd1c81cb1e664c199ab513049b2/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	66e5de028885f       ead0a4a53df89                                                                              About a minute ago   Running             coredns                   1                   1456a363c70fd       coredns-5dd5756b68-vm8jw
	3e63c379e41a3       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   About a minute ago   Running             kindnet-cni               0                   088e2f49585df       kindnet-5txks
	3a56b85798430       6e38f40d628db                                                                              About a minute ago   Running             storage-provisioner       0                   c78173724764d       storage-provisioner
	130896ac2d7b0       ead0a4a53df89                                                                              About a minute ago   Exited              coredns                   0                   5820da8798d68       coredns-5dd5756b68-vm8jw
	5c53a7a668f95       bfc896cf80fba                                                                              About a minute ago   Running             kube-proxy                0                   53b2db21418a1       kube-proxy-2dzxx
	30d1ff6804721       6d1b4fd1b182d                                                                              2 minutes ago        Running             kube-scheduler            0                   279847d786425       kube-scheduler-multinode-971000
	d8ee2e0d080d2       5374347291230                                                                              2 minutes ago        Running             kube-apiserver            0                   eeb64dfb3ded1       kube-apiserver-multinode-971000
	4b2e003897a9a       73deb9a3f7025                                                                              2 minutes ago        Running             etcd                      0                   7ac42f2dcdad4       etcd-multinode-971000
	d755633ba4432       10baa1ca17068                                                                              2 minutes ago        Running             kube-controller-manager   0                   b9a8f2c969a51       kube-controller-manager-multinode-971000
	
	* 
	* ==> coredns [130896ac2d7b] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 7231284757438160731.4679241085068702325. HINFO: dial udp 192.168.65.254:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 7231284757438160731.4679241085068702325. HINFO: dial udp 192.168.65.254:53: connect: network is unreachable
	
	* 
	* ==> coredns [66e5de028885] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37276 - 37745 "HINFO IN 1263933810490036710.2232372054731377199. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008400338s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-971000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-971000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=multinode-971000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T18_02_10_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 01:02:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-971000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 01:04:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 01:02:40 +0000   Thu, 26 Oct 2023 01:02:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 01:02:40 +0000   Thu, 26 Oct 2023 01:02:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 01:02:40 +0000   Thu, 26 Oct 2023 01:02:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 01:02:40 +0000   Thu, 26 Oct 2023 01:02:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-971000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e7c45f7441348bea4dd9fd7902c5f60
	  System UUID:                5e7c45f7441348bea4dd9fd7902c5f60
	  Boot ID:                    97028b5e-c1fe-46d5-abb1-881a12fedf72
	  Kernel Version:             6.4.16-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-vm8jw                    100m (0%)     0 (0%)      70Mi (1%)        170Mi (2%)     108s
	  kube-system                 etcd-multinode-971000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-5txks                               100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-multinode-971000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-multinode-971000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-2dzxx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-multinode-971000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (7%)   100m (0%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node multinode-971000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node multinode-971000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node multinode-971000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node multinode-971000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node multinode-971000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node multinode-971000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           109s                 node-controller  Node multinode-971000 event: Registered Node multinode-971000 in Controller
	
	
	Name:               multinode-971000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-971000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 01:02:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-971000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 01:04:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 01:03:07 +0000   Thu, 26 Oct 2023 01:02:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 01:03:07 +0000   Thu, 26 Oct 2023 01:02:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 01:03:07 +0000   Thu, 26 Oct 2023 01:02:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 01:03:07 +0000   Thu, 26 Oct 2023 01:02:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-971000-m02
	Capacity:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bcaa049ff3e4818bcbee689f3319ded
	  System UUID:                5bcaa049ff3e4818bcbee689f3319ded
	  Boot ID:                    97028b5e-c1fe-46d5-abb1-881a12fedf72
	  Kernel Version:             6.4.16-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2z4jl       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-proxy-qbx49    0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (0%)  100m (0%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 92s                kube-proxy       
	  Normal  Starting                 94s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s (x2 over 94s)  kubelet          Node multinode-971000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x2 over 94s)  kubelet          Node multinode-971000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x2 over 94s)  kubelet          Node multinode-971000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                93s                kubelet          Node multinode-971000-m02 status is now: NodeReady
	  Normal  RegisteredNode           89s                node-controller  Node multinode-971000-m02 event: Registered Node multinode-971000-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.002920] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:07.0: PCI INT A: no GSI
	[  +0.002075] virtio-pci 0000:00:08.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:08.0: PCI INT A: no GSI
	[  +0.004650] virtio-pci 0000:00:09.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:09.0: PCI INT A: no GSI
	[  +0.005011] virtio-pci 0000:00:0a.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0a.0: PCI INT A: no GSI
	[  +0.001909] virtio-pci 0000:00:0b.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0b.0: PCI INT A: no GSI
	[  +0.005014] virtio-pci 0000:00:0c.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0c.0: PCI INT A: no GSI
	[  +0.000255] virtio-pci 0000:00:0d.0: can't derive routing for PCI INT A
	[  +0.000000] virtio-pci 0000:00:0d.0: PCI INT A: no GSI
	[  +0.003210] virtio-pci 0000:00:0e.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0e.0: PCI INT A: no GSI
	[  +0.007936] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
	[  +0.025214] lpc_ich 0000:00:1f.0: No MFD cells added
	[  +0.006812] fail to initialize ptp_kvm
	[  +0.000001] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.756658] netlink: 'rc.init': attribute type 22 has an invalid length.
	[  +0.007092] 3[378]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.199399] FAT-fs (loop0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[  +0.000376] FAT-fs (loop0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[  +0.016213] grpcfuse: loading out-of-tree module taints kernel.
	
	* 
	* ==> etcd [4b2e003897a9] <==
	* {"level":"info","ts":"2023-10-26T01:02:04.535395Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-26T01:02:04.536334Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-26T01:02:04.53654Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-26T01:02:04.536619Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-26T01:02:04.536567Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-26T01:02:04.536638Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-26T01:02:05.058367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-26T01:02:05.058488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-26T01:02:05.058499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-26T01:02:05.058507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-26T01:02:05.058511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-26T01:02:05.058516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-26T01:02:05.058521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-26T01:02:05.059657Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:02:05.060254Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-971000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-26T01:02:05.060331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:02:05.060496Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:02:05.060588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:02:05.060603Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:02:05.060702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:02:05.060776Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-26T01:02:05.060787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T01:02:05.06123Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-26T01:02:05.062073Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-26T01:02:28.79151Z","caller":"traceutil/trace.go:171","msg":"trace[1546140171] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"124.845392ms","start":"2023-10-26T01:02:28.666654Z","end":"2023-10-26T01:02:28.791499Z","steps":["trace[1546140171] 'process raft request'  (duration: 124.685634ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:04:10 up 26 min,  0 users,  load average: 0.34, 0.67, 0.53
	Linux multinode-971000 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Oct 10 20:42:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [3e63c379e41a] <==
	* I1026 01:03:08.864450       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:18.869192       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:18.869228       1 main.go:227] handling current node
	I1026 01:03:18.869236       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:18.869240       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:28.882106       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:28.882144       1 main.go:227] handling current node
	I1026 01:03:28.882151       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:28.882155       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:38.895411       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:38.895470       1 main.go:227] handling current node
	I1026 01:03:38.934157       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:38.934202       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:48.945857       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:48.945894       1 main.go:227] handling current node
	I1026 01:03:48.945902       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:48.945906       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:58.952508       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:58.952748       1 main.go:227] handling current node
	I1026 01:03:58.952757       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:58.952761       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:04:08.959689       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:04:08.959737       1 main.go:227] handling current node
	I1026 01:04:08.959747       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:04:08.959752       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [d8ee2e0d080d] <==
	* I1026 01:02:06.667595       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 01:02:06.667600       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 01:02:06.667603       1 cache.go:39] Caches are synced for autoregister controller
	I1026 01:02:06.731440       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 01:02:06.731495       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 01:02:06.734054       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 01:02:06.734671       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 01:02:06.734906       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 01:02:06.736126       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 01:02:06.830954       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 01:02:07.572411       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 01:02:07.575613       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 01:02:07.575653       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 01:02:07.932220       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 01:02:07.965074       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 01:02:08.043435       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 01:02:08.048235       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1026 01:02:08.049089       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 01:02:08.054001       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:02:08.647392       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 01:02:09.537608       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 01:02:09.548531       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 01:02:09.556011       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1026 01:02:22.138953       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1026 01:02:22.339987       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [d755633ba443] <==
	* I1026 01:02:22.351811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2dzxx"
	I1026 01:02:22.352459       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5txks"
	I1026 01:02:22.444508       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-cvn82"
	I1026 01:02:22.450977       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vm8jw"
	I1026 01:02:22.547506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="403.514449ms"
	I1026 01:02:22.552676       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cvn82"
	I1026 01:02:22.636125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.565687ms"
	I1026 01:02:22.644234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.027134ms"
	I1026 01:02:22.644378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.693µs"
	I1026 01:02:22.649665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.972µs"
	I1026 01:02:22.742607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.122µs"
	I1026 01:02:24.599987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.21µs"
	I1026 01:02:24.617497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.405µs"
	I1026 01:02:24.621964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="222.101µs"
	I1026 01:02:24.623683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.096µs"
	I1026 01:02:36.782223       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-971000-m02\" does not exist"
	I1026 01:02:36.789359       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-971000-m02" podCIDRs=["10.244.1.0/24"]
	I1026 01:02:36.793648       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2z4jl"
	I1026 01:02:36.795893       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qbx49"
	I1026 01:02:37.132496       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-971000-m02"
	I1026 01:02:37.740130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.148µs"
	I1026 01:02:37.755415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.493171ms"
	I1026 01:02:37.755569       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.192µs"
	I1026 01:02:41.489426       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-971000-m02"
	I1026 01:02:41.489509       1 event.go:307] "Event occurred" object="multinode-971000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-971000-m02 event: Registered Node multinode-971000-m02 in Controller"
	
	* 
	* ==> kube-proxy [5c53a7a668f9] <==
	* I1026 01:02:23.748003       1 server_others.go:69] "Using iptables proxy"
	I1026 01:02:23.832149       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1026 01:02:23.859791       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 01:02:23.862765       1 server_others.go:152] "Using iptables Proxier"
	I1026 01:02:23.862902       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 01:02:23.862916       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 01:02:23.862945       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 01:02:23.864454       1 server.go:846] "Version info" version="v1.28.3"
	I1026 01:02:23.864508       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:02:23.866632       1 config.go:188] "Starting service config controller"
	I1026 01:02:23.866663       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 01:02:23.866675       1 config.go:97] "Starting endpoint slice config controller"
	I1026 01:02:23.866688       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 01:02:23.866811       1 config.go:315] "Starting node config controller"
	I1026 01:02:23.866819       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 01:02:23.967434       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 01:02:23.967496       1 shared_informer.go:318] Caches are synced for service config
	I1026 01:02:23.967513       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [30d1ff680472] <==
	* E1026 01:02:06.652586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 01:02:06.652586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1026 01:02:06.652963       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1026 01:02:06.653022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 01:02:06.653030       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 01:02:06.653034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 01:02:06.730202       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 01:02:06.730419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 01:02:06.735298       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 01:02:06.735430       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1026 01:02:06.735613       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 01:02:06.735755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 01:02:06.735463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 01:02:06.735948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1026 01:02:06.735339       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 01:02:06.736002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 01:02:07.635943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 01:02:07.635973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 01:02:07.692474       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 01:02:07.692533       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 01:02:07.773403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 01:02:07.773450       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 01:02:07.775577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 01:02:07.775615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1026 01:02:09.349385       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 26 01:02:22 multinode-971000 kubelet[2474]: I1026 01:02:22.544989    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8stq\" (UniqueName: \"kubernetes.io/projected/8747ca8b-8044-46a8-a5bd-700e0fb6ceb8-kube-api-access-b8stq\") pod \"coredns-5dd5756b68-vm8jw\" (UID: \"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8\") " pod="kube-system/coredns-5dd5756b68-vm8jw"
	Oct 26 01:02:22 multinode-971000 kubelet[2474]: I1026 01:02:22.545092    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00548f2-a206-488a-9e2b-45f2e1066597-config-volume\") pod \"coredns-5dd5756b68-cvn82\" (UID: \"b00548f2-a206-488a-9e2b-45f2e1066597\") " pod="kube-system/coredns-5dd5756b68-cvn82"
	Oct 26 01:02:22 multinode-971000 kubelet[2474]: I1026 01:02:22.545367    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbtqv\" (UniqueName: \"kubernetes.io/projected/b00548f2-a206-488a-9e2b-45f2e1066597-kube-api-access-vbtqv\") pod \"coredns-5dd5756b68-cvn82\" (UID: \"b00548f2-a206-488a-9e2b-45f2e1066597\") " pod="kube-system/coredns-5dd5756b68-cvn82"
	Oct 26 01:02:22 multinode-971000 kubelet[2474]: I1026 01:02:22.545425    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8747ca8b-8044-46a8-a5bd-700e0fb6ceb8-config-volume\") pod \"coredns-5dd5756b68-vm8jw\" (UID: \"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8\") " pod="kube-system/coredns-5dd5756b68-vm8jw"
	Oct 26 01:02:22 multinode-971000 kubelet[2474]: E1026 01:02:22.553589    2474 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[config-volume kube-api-access-vbtqv], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/coredns-5dd5756b68-cvn82" podUID="b00548f2-a206-488a-9e2b-45f2e1066597"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.532489    2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5820da8798d68fc61909468c96e71830a39cc52b26a3e09f5d3dbe3f059f9ece"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.537667    2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="088e2f49585df3edcd504b3af3f4f591dfaca61ed0fcdce6b37853c9d6eb7c58"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.562159    2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b2db21418a13bed7b201ee288a10c8cedf3987ab476aa1f2b977752337a6c5"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.655597    2474 topology_manager.go:215] "Topology Admit Handler" podUID="8a6d679a-a32e-4707-ad40-063155cf0cde" podNamespace="kube-system" podName="storage-provisioner"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.657047    2474 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00548f2-a206-488a-9e2b-45f2e1066597-config-volume\") pod \"b00548f2-a206-488a-9e2b-45f2e1066597\" (UID: \"b00548f2-a206-488a-9e2b-45f2e1066597\") "
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.657172    2474 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbtqv\" (UniqueName: \"kubernetes.io/projected/b00548f2-a206-488a-9e2b-45f2e1066597-kube-api-access-vbtqv\") pod \"b00548f2-a206-488a-9e2b-45f2e1066597\" (UID: \"b00548f2-a206-488a-9e2b-45f2e1066597\") "
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.658164    2474 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b00548f2-a206-488a-9e2b-45f2e1066597-config-volume" (OuterVolumeSpecName: "config-volume") pod "b00548f2-a206-488a-9e2b-45f2e1066597" (UID: "b00548f2-a206-488a-9e2b-45f2e1066597"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.662384    2474 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00548f2-a206-488a-9e2b-45f2e1066597-kube-api-access-vbtqv" (OuterVolumeSpecName: "kube-api-access-vbtqv") pod "b00548f2-a206-488a-9e2b-45f2e1066597" (UID: "b00548f2-a206-488a-9e2b-45f2e1066597"). InnerVolumeSpecName "kube-api-access-vbtqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.757379    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8a6d679a-a32e-4707-ad40-063155cf0cde-tmp\") pod \"storage-provisioner\" (UID: \"8a6d679a-a32e-4707-ad40-063155cf0cde\") " pod="kube-system/storage-provisioner"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.757432    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdcn\" (UniqueName: \"kubernetes.io/projected/8a6d679a-a32e-4707-ad40-063155cf0cde-kube-api-access-8jdcn\") pod \"storage-provisioner\" (UID: \"8a6d679a-a32e-4707-ad40-063155cf0cde\") " pod="kube-system/storage-provisioner"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.757457    2474 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00548f2-a206-488a-9e2b-45f2e1066597-config-volume\") on node \"multinode-971000\" DevicePath \"\""
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.757466    2474 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vbtqv\" (UniqueName: \"kubernetes.io/projected/b00548f2-a206-488a-9e2b-45f2e1066597-kube-api-access-vbtqv\") on node \"multinode-971000\" DevicePath \"\""
	Oct 26 01:02:24 multinode-971000 kubelet[2474]: I1026 01:02:24.579290    2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.579264685 podCreationTimestamp="2023-10-26 01:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:02:24.579090341 +0000 UTC m=+15.072789446" watchObservedRunningTime="2023-10-26 01:02:24.579264685 +0000 UTC m=+15.072963785"
	Oct 26 01:02:24 multinode-971000 kubelet[2474]: I1026 01:02:24.600086    2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vm8jw" podStartSLOduration=2.600036384 podCreationTimestamp="2023-10-26 01:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:02:24.599486756 +0000 UTC m=+15.093185861" watchObservedRunningTime="2023-10-26 01:02:24.600036384 +0000 UTC m=+15.093735493"
	Oct 26 01:02:24 multinode-971000 kubelet[2474]: I1026 01:02:24.609787    2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2dzxx" podStartSLOduration=2.609758703 podCreationTimestamp="2023-10-26 01:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:02:24.609591148 +0000 UTC m=+15.103290254" watchObservedRunningTime="2023-10-26 01:02:24.609758703 +0000 UTC m=+15.103457808"
	Oct 26 01:02:25 multinode-971000 kubelet[2474]: I1026 01:02:25.671706    2474 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b00548f2-a206-488a-9e2b-45f2e1066597" path="/var/lib/kubelet/pods/b00548f2-a206-488a-9e2b-45f2e1066597/volumes"
	Oct 26 01:02:30 multinode-971000 kubelet[2474]: I1026 01:02:30.070510    2474 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 01:02:30 multinode-971000 kubelet[2474]: I1026 01:02:30.071355    2474 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 01:02:37 multinode-971000 kubelet[2474]: I1026 01:02:37.731281    2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5820da8798d68fc61909468c96e71830a39cc52b26a3e09f5d3dbe3f059f9ece"
	Oct 26 01:02:37 multinode-971000 kubelet[2474]: I1026 01:02:37.740313    2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5txks" podStartSLOduration=11.602141 podCreationTimestamp="2023-10-26 01:02:22 +0000 UTC" firstStartedPulling="2023-10-26 01:02:23.537351308 +0000 UTC m=+14.031050404" lastFinishedPulling="2023-10-26 01:02:27.676459274 +0000 UTC m=+18.169193335" observedRunningTime="2023-10-26 01:02:28.792995481 +0000 UTC m=+19.285729539" watchObservedRunningTime="2023-10-26 01:02:37.740283931 +0000 UTC m=+28.233017993"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-971000 -n multinode-971000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-971000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (91.44s)

TestMultiNode/serial/PingHostFrom2Pods (3.55s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-971000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.21156ms)

** stderr ** 
	Error running /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: fork/exec /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl: exec format error

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
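The "exec format error" above means the file cached at .minikube/cache/darwin/amd64/v1.28.3/kubectl is not a runnable darwin/amd64 executable: most likely a build for another OS or architecture, or a truncated download, was stored under that name. A minimal diagnostic sketch, hypothetical and not part of the test suite, using Go's standard debug/macho and debug/elf packages to see what the cached file actually is:

// checkbin.go: hypothetical diagnostic, not part of minikube or its tests.
// Reports whether a cached binary is Mach-O (darwin) or ELF (linux), which is
// enough to explain a fork/exec "exec format error" on macOS.
// (A universal/fat Mach-O would need macho.OpenFat; omitted for brevity.)
package main

import (
	"debug/elf"
	"debug/macho"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Println("usage: checkbin <path-to-binary>")
		return
	}
	path := os.Args[1] // e.g. the cached kubectl path from the stderr output

	if f, err := macho.Open(path); err == nil {
		defer f.Close()
		fmt.Printf("Mach-O binary, cpu=%v: runnable on darwin\n", f.Cpu)
		return
	}
	if f, err := elf.Open(path); err == nil {
		defer f.Close()
		fmt.Printf("ELF binary, machine=%v: a Linux build, macOS cannot exec it\n", f.Machine)
		return
	}
	fmt.Println("neither Mach-O nor ELF: truncated, corrupt, or not an executable")
}

Pointing it at the cached path from the stderr output would distinguish a wrong-platform binary from a corrupt one; either way the failure happens at fork/exec of the cached kubectl, before the test ever reaches the cluster.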
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-971000
helpers_test.go:235: (dbg) docker inspect multinode-971000:

-- stdout --
	[
	    {
	        "Id": "28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932",
	        "Created": "2023-10-26T01:01:54.157875975Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 105004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:01:54.3860038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932/hostname",
	        "HostsPath": "/var/lib/docker/containers/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932/hosts",
	        "LogPath": "/var/lib/docker/containers/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932/28ed23c726e29cfb999cf2223ecb2dcd787dc53207800cfd930b06cb48193932-json.log",
	        "Name": "/multinode-971000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-971000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-971000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b762f3f580f129f804dc6a2edaf0f83875285fdf861fdfa66b8e013332791b02-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b762f3f580f129f804dc6a2edaf0f83875285fdf861fdfa66b8e013332791b02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b762f3f580f129f804dc6a2edaf0f83875285fdf861fdfa66b8e013332791b02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b762f3f580f129f804dc6a2edaf0f83875285fdf861fdfa66b8e013332791b02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-971000",
	                "Source": "/var/lib/docker/volumes/multinode-971000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-971000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-971000",
	                "name.minikube.sigs.k8s.io": "multinode-971000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d47fbd75dccf7d881c8249d14e2d98d0305d3a03187922e80cee12c0b3675f3d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57081"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57082"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57083"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d47fbd75dccf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-971000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "28ed23c726e2",
	                        "multinode-971000"
	                    ],
	                    "NetworkID": "57776fd0c26f2b12bbfd7c05969e8e301d089428ef095f98a16fe04bd9335135",
	                    "EndpointID": "5687b7332f76a8cd6692e1de5b9a5007f38379ff68b6d46dfe1e94bdef9452e3",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
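The NetworkSettings.Ports block in the inspect output above shows how each container port (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 with an ephemeral host port; the "Last Start" log further down resolves the SSH endpoint by asking Docker for the 22/tcp mapping with a Go template. A minimal stand-alone sketch of the same lookup, assuming only the docker CLI and the multinode-971000 container from this report:

// portlookup.go: hypothetical sketch, not minikube code. It asks Docker for the
// host port mapped to the container's 22/tcp, using the same Go template the
// provisioning log below runs via cli_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"multinode-971000").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("ssh published at 127.0.0.1:" + strings.TrimSpace(string(out)))
}

For the container inspected above this would print 57079, the HostPort listed for 22/tcp, which is also the port the provisioning log later dials over SSH.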
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-971000 -n multinode-971000
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-971000 logs -n 25: (2.450585403s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-049000 ssh -- ls                    | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-049000                           | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	| start   | -p mount-start-2-049000                           | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	| ssh     | mount-start-2-049000 ssh -- ls                    | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-049000                           | mount-start-2-049000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	| delete  | -p mount-start-1-034000                           | mount-start-1-034000 | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:01 PDT |
	| start   | -p multinode-971000                               | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:01 PDT | 25 Oct 23 18:02 PDT |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- apply -f                   | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- rollout                    | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:02 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:03 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:03 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:03 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:03 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- exec                       | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | -- nslookup kubernetes.io                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- exec                       | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | -- nslookup kubernetes.default                    |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000                               | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | -- exec  -- nslookup                              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-971000 -- get pods -o                | multinode-971000     | jenkins | v1.31.2 | 25 Oct 23 18:04 PDT |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 18:01:49
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 18:01:49.498888   70293 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:01:49.499167   70293 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:01:49.499173   70293 out.go:309] Setting ErrFile to fd 2...
	I1025 18:01:49.499177   70293 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:01:49.499348   70293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:01:49.500744   70293 out.go:303] Setting JSON to false
	I1025 18:01:49.522571   70293 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":32477,"bootTime":1698249632,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 18:01:49.522684   70293 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:01:49.544288   70293 out.go:177] * [multinode-971000] minikube v1.31.2 on Darwin 14.0
	I1025 18:01:49.588058   70293 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 18:01:49.588138   70293 notify.go:220] Checking for updates...
	I1025 18:01:49.632070   70293 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:01:49.674869   70293 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:01:49.696126   70293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:01:49.717947   70293 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 18:01:49.739211   70293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:01:49.761353   70293 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:01:49.819296   70293 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 18:01:49.819455   70293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:01:49.922795   70293 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-26 01:01:49.910840689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:01:49.965276   70293 out.go:177] * Using the docker driver based on user configuration
	I1025 18:01:49.987014   70293 start.go:298] selected driver: docker
	I1025 18:01:49.987040   70293 start.go:902] validating driver "docker" against <nil>
	I1025 18:01:49.987056   70293 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:01:49.991115   70293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:01:50.092309   70293 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-26 01:01:50.08116753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfine
d name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages
Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sco
ut Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:01:50.092501   70293 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 18:01:50.092690   70293 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:01:50.115236   70293 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 18:01:50.136038   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:01:50.136068   70293 cni.go:136] 0 nodes found, recommending kindnet
	I1025 18:01:50.136082   70293 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 18:01:50.136104   70293 start_flags.go:323] config:
	{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:01:50.180213   70293 out.go:177] * Starting control plane node multinode-971000 in cluster multinode-971000
	I1025 18:01:50.202306   70293 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:01:50.224198   70293 out.go:177] * Pulling base image ...
	I1025 18:01:50.268552   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:01:50.268618   70293 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:01:50.268647   70293 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 18:01:50.268663   70293 cache.go:56] Caching tarball of preloaded images
	I1025 18:01:50.268853   70293 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:01:50.268873   70293 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 18:01:50.270544   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:01:50.270652   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json: {Name:mk1243f5af0e9ee909e7b7748d23b2f2b24a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:50.320506   70293 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:01:50.320523   70293 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:01:50.320548   70293 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:01:50.320594   70293 start.go:365] acquiring machines lock for multinode-971000: {Name:mk01e6cc063ed20be62de6672a43541267a64e02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:01:50.320762   70293 start.go:369] acquired machines lock for "multinode-971000" in 152.785µs
	I1025 18:01:50.320790   70293 start.go:93] Provisioning new machine with config: &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:01:50.320876   70293 start.go:125] createHost starting for "" (driver="docker")
	I1025 18:01:50.347347   70293 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 18:01:50.347730   70293 start.go:159] libmachine.API.Create for "multinode-971000" (driver="docker")
	I1025 18:01:50.347820   70293 client.go:168] LocalClient.Create starting
	I1025 18:01:50.348014   70293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem
	I1025 18:01:50.348121   70293 main.go:141] libmachine: Decoding PEM data...
	I1025 18:01:50.348158   70293 main.go:141] libmachine: Parsing certificate...
	I1025 18:01:50.348270   70293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem
	I1025 18:01:50.348334   70293 main.go:141] libmachine: Decoding PEM data...
	I1025 18:01:50.348354   70293 main.go:141] libmachine: Parsing certificate...
	I1025 18:01:50.369560   70293 cli_runner.go:164] Run: docker network inspect multinode-971000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 18:01:50.422778   70293 cli_runner.go:211] docker network inspect multinode-971000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 18:01:50.422880   70293 network_create.go:281] running [docker network inspect multinode-971000] to gather additional debugging logs...
	I1025 18:01:50.422896   70293 cli_runner.go:164] Run: docker network inspect multinode-971000
	W1025 18:01:50.474299   70293 cli_runner.go:211] docker network inspect multinode-971000 returned with exit code 1
	I1025 18:01:50.474326   70293 network_create.go:284] error running [docker network inspect multinode-971000]: docker network inspect multinode-971000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-971000 not found
	I1025 18:01:50.474339   70293 network_create.go:286] output of [docker network inspect multinode-971000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-971000 not found
	
	** /stderr **
	I1025 18:01:50.474464   70293 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 18:01:50.526790   70293 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1025 18:01:50.527186   70293 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00229d180}
	I1025 18:01:50.527204   70293 network_create.go:124] attempt to create docker network multinode-971000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1025 18:01:50.527272   70293 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-971000 multinode-971000
	I1025 18:01:50.614614   70293 network_create.go:108] docker network multinode-971000 192.168.58.0/24 created
	I1025 18:01:50.614649   70293 kic.go:118] calculated static IP "192.168.58.2" for the "multinode-971000" container
	I1025 18:01:50.614751   70293 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 18:01:50.665569   70293 cli_runner.go:164] Run: docker volume create multinode-971000 --label name.minikube.sigs.k8s.io=multinode-971000 --label created_by.minikube.sigs.k8s.io=true
	I1025 18:01:50.717387   70293 oci.go:103] Successfully created a docker volume multinode-971000
	I1025 18:01:50.717497   70293 cli_runner.go:164] Run: docker run --rm --name multinode-971000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-971000 --entrypoint /usr/bin/test -v multinode-971000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 18:01:51.131295   70293 oci.go:107] Successfully prepared a docker volume multinode-971000
	I1025 18:01:51.131332   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:01:51.131343   70293 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 18:01:51.131423   70293 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-971000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 18:01:54.005955   70293 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-971000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (2.87440013s)
	I1025 18:01:54.005981   70293 kic.go:200] duration metric: took 2.874548 seconds to extract preloaded images to volume
	I1025 18:01:54.006096   70293 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 18:01:54.108416   70293 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-971000 --name multinode-971000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-971000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-971000 --network multinode-971000 --ip 192.168.58.2 --volume multinode-971000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 18:01:54.394985   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Running}}
	I1025 18:01:54.453726   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:01:54.541781   70293 cli_runner.go:164] Run: docker exec multinode-971000 stat /var/lib/dpkg/alternatives/iptables
	I1025 18:01:54.656498   70293 oci.go:144] the created container "multinode-971000" has a running status.
	I1025 18:01:54.656541   70293 kic.go:222] Creating ssh key for kic: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa...
	I1025 18:01:54.881053   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 18:01:54.881112   70293 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 18:01:54.951847   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:01:55.009155   70293 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 18:01:55.009180   70293 kic_runner.go:114] Args: [docker exec --privileged multinode-971000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 18:01:55.107537   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:01:55.159059   70293 machine.go:88] provisioning docker machine ...
	I1025 18:01:55.159102   70293 ubuntu.go:169] provisioning hostname "multinode-971000"
	I1025 18:01:55.159199   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:55.210385   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:55.210713   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:55.210727   70293 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-971000 && echo "multinode-971000" | sudo tee /etc/hostname
	I1025 18:01:55.343625   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-971000
	
	I1025 18:01:55.343723   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:55.395137   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:55.395430   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:55.395444   70293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-971000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-971000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-971000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:01:55.518871   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:01:55.518894   70293 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:01:55.518923   70293 ubuntu.go:177] setting up certificates
	I1025 18:01:55.518934   70293 provision.go:83] configureAuth start
	I1025 18:01:55.519014   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000
	I1025 18:01:55.569972   70293 provision.go:138] copyHostCerts
	I1025 18:01:55.570012   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:01:55.570064   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:01:55.570071   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:01:55.570140   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:01:55.570366   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:01:55.570392   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:01:55.570396   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:01:55.570467   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:01:55.570645   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:01:55.570680   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:01:55.570685   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:01:55.570749   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:01:55.570908   70293 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.multinode-971000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-971000]
	I1025 18:01:55.688749   70293 provision.go:172] copyRemoteCerts
	I1025 18:01:55.688802   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:01:55.688860   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:55.740262   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:55.829152   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 18:01:55.829232   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 18:01:55.851809   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 18:01:55.851877   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 18:01:55.874394   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 18:01:55.874466   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:01:55.897116   70293 provision.go:86] duration metric: configureAuth took 378.15699ms
	I1025 18:01:55.897134   70293 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:01:55.897271   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:01:55.897330   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:55.950368   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:55.950681   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:55.950695   70293 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:01:56.073112   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:01:56.073127   70293 ubuntu.go:71] root file system type: overlay
	I1025 18:01:56.073209   70293 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:01:56.073290   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:56.124374   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:56.124685   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:56.124742   70293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:01:56.257703   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:01:56.257827   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:56.309694   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:01:56.309997   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57079 <nil> <nil>}
	I1025 18:01:56.310012   70293 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:01:56.904475   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:01:56.254499664 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
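The SSH command above only swaps in docker.service.new and restarts Docker when the diff is non-empty, so an unchanged unit never forces a restart. A minimal shell sketch of that diff-or-replace pattern (same paths as the log; not minikube's exact code):

    # Replace the unit only if its content changed, then reload, enable, and restart.
    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    if ! sudo diff -u "$cur" "$new"; then
      sudo mv "$new" "$cur"
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi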
	I1025 18:01:56.904507   70293 machine.go:91] provisioned docker machine in 1.745372989s
	I1025 18:01:56.904514   70293 client.go:171] LocalClient.Create took 6.556487105s
	I1025 18:01:56.904535   70293 start.go:167] duration metric: libmachine.API.Create for "multinode-971000" took 6.556612493s
	I1025 18:01:56.904544   70293 start.go:300] post-start starting for "multinode-971000" (driver="docker")
	I1025 18:01:56.904552   70293 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:01:56.904625   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:01:56.904678   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:56.957840   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:57.049566   70293 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:01:57.053697   70293 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1025 18:01:57.053706   70293 command_runner.go:130] > NAME="Ubuntu"
	I1025 18:01:57.053711   70293 command_runner.go:130] > VERSION_ID="22.04"
	I1025 18:01:57.053717   70293 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1025 18:01:57.053730   70293 command_runner.go:130] > VERSION_CODENAME=jammy
	I1025 18:01:57.053735   70293 command_runner.go:130] > ID=ubuntu
	I1025 18:01:57.053738   70293 command_runner.go:130] > ID_LIKE=debian
	I1025 18:01:57.053743   70293 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 18:01:57.053749   70293 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 18:01:57.053756   70293 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 18:01:57.053762   70293 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 18:01:57.053766   70293 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1025 18:01:57.053805   70293 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:01:57.053833   70293 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:01:57.053840   70293 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:01:57.053845   70293 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:01:57.053856   70293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:01:57.053958   70293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:01:57.054128   70293 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:01:57.054135   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /etc/ssl/certs/652922.pem
	I1025 18:01:57.054308   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:01:57.063286   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:01:57.085695   70293 start.go:303] post-start completed in 181.137729ms
	I1025 18:01:57.086222   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000
	I1025 18:01:57.137424   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:01:57.137882   70293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:01:57.137948   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:57.188976   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:57.274968   70293 command_runner.go:130] > 6%!
	(MISSING)I1025 18:01:57.275055   70293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:01:57.280148   70293 command_runner.go:130] > 92G
	I1025 18:01:57.280474   70293 start.go:128] duration metric: createHost completed in 6.959376519s
	I1025 18:01:57.280492   70293 start.go:83] releasing machines lock for "multinode-971000", held for 6.959510468s
	I1025 18:01:57.280584   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000
	I1025 18:01:57.331602   70293 ssh_runner.go:195] Run: cat /version.json
	I1025 18:01:57.331623   70293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:01:57.331675   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:57.331687   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:57.389933   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:57.390145   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:01:57.583210   70293 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 18:01:57.585545   70293 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1698055645-17423", "minikube_version": "v1.31.2", "commit": "585245745aba695f9444ad633713942a6eacd882"}
	I1025 18:01:57.585678   70293 ssh_runner.go:195] Run: systemctl --version
	I1025 18:01:57.590825   70293 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1025 18:01:57.590853   70293 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1025 18:01:57.590922   70293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 18:01:57.596062   70293 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1025 18:01:57.596081   70293 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1025 18:01:57.596086   70293 command_runner.go:130] > Device: a4h/164d	Inode: 1048758     Links: 1
	I1025 18:01:57.596091   70293 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 18:01:57.596096   70293 command_runner.go:130] > Access: 2023-10-26 00:39:30.354217175 +0000
	I1025 18:01:57.596100   70293 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1025 18:01:57.596105   70293 command_runner.go:130] > Change: 2023-10-26 00:39:14.867105012 +0000
	I1025 18:01:57.596110   70293 command_runner.go:130] >  Birth: 2023-10-26 00:39:14.867105012 +0000
	I1025 18:01:57.596398   70293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 18:01:57.620904   70293 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 18:01:57.620968   70293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 18:01:57.646596   70293 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1025 18:01:57.646627   70293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1025 18:01:57.646635   70293 start.go:472] detecting cgroup driver to use...
	I1025 18:01:57.646649   70293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:01:57.646764   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:01:57.662197   70293 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1025 18:01:57.663133   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 18:01:57.673421   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:01:57.683772   70293 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:01:57.683835   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:01:57.694297   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:01:57.704507   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:01:57.714905   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:01:57.725371   70293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:01:57.735204   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:01:57.745792   70293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:01:57.754192   70293 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1025 18:01:57.754815   70293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:01:57.763765   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:01:57.822145   70293 ssh_runner.go:195] Run: sudo systemctl restart containerd
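The sed calls above rewrite /etc/containerd/config.toml in place before containerd is restarted. An illustrative way to confirm the keys they touch (expected values are inferred from the commands in this log, not dumped from the node):

    # Show the containerd settings the preceding sed edits are meant to leave behind.
    grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # roughly expected:
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"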
	I1025 18:01:57.899919   70293 start.go:472] detecting cgroup driver to use...
	I1025 18:01:57.899939   70293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:01:57.900011   70293 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:01:57.916708   70293 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 18:01:57.916813   70293 command_runner.go:130] > [Unit]
	I1025 18:01:57.916822   70293 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 18:01:57.916827   70293 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 18:01:57.916832   70293 command_runner.go:130] > BindsTo=containerd.service
	I1025 18:01:57.916837   70293 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 18:01:57.916841   70293 command_runner.go:130] > Wants=network-online.target
	I1025 18:01:57.916847   70293 command_runner.go:130] > Requires=docker.socket
	I1025 18:01:57.916851   70293 command_runner.go:130] > StartLimitBurst=3
	I1025 18:01:57.916855   70293 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 18:01:57.916858   70293 command_runner.go:130] > [Service]
	I1025 18:01:57.916862   70293 command_runner.go:130] > Type=notify
	I1025 18:01:57.916867   70293 command_runner.go:130] > Restart=on-failure
	I1025 18:01:57.916876   70293 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 18:01:57.916889   70293 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 18:01:57.916895   70293 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 18:01:57.916901   70293 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 18:01:57.916908   70293 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 18:01:57.916924   70293 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 18:01:57.916934   70293 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 18:01:57.916944   70293 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 18:01:57.916949   70293 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 18:01:57.916953   70293 command_runner.go:130] > ExecStart=
	I1025 18:01:57.916964   70293 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 18:01:57.916972   70293 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 18:01:57.916977   70293 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 18:01:57.916983   70293 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 18:01:57.916986   70293 command_runner.go:130] > LimitNOFILE=infinity
	I1025 18:01:57.916990   70293 command_runner.go:130] > LimitNPROC=infinity
	I1025 18:01:57.916993   70293 command_runner.go:130] > LimitCORE=infinity
	I1025 18:01:57.916998   70293 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 18:01:57.917004   70293 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 18:01:57.917007   70293 command_runner.go:130] > TasksMax=infinity
	I1025 18:01:57.917011   70293 command_runner.go:130] > TimeoutStartSec=0
	I1025 18:01:57.917017   70293 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 18:01:57.917021   70293 command_runner.go:130] > Delegate=yes
	I1025 18:01:57.917028   70293 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 18:01:57.917032   70293 command_runner.go:130] > KillMode=process
	I1025 18:01:57.917048   70293 command_runner.go:130] > [Install]
	I1025 18:01:57.917057   70293 command_runner.go:130] > WantedBy=multi-user.target
	I1025 18:01:57.917748   70293 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:01:57.917808   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:01:57.930232   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:01:57.947889   70293 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 18:01:57.949202   70293 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:01:57.954285   70293 command_runner.go:130] > /usr/bin/cri-dockerd
	I1025 18:01:57.954413   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:01:57.965521   70293 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:01:57.984228   70293 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:01:58.071964   70293 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:01:58.170072   70293 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:01:58.170189   70293 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:01:58.189527   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:01:58.288374   70293 ssh_runner.go:195] Run: sudo systemctl restart docker
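The 130-byte daemon.json written and applied above is not echoed in the log. As a rough illustration only (not the literal file), a Docker daemon.json that pins the cgroupfs driver looks like this:

    # Hypothetical example of a daemon.json selecting the cgroupfs driver; the real
    # file minikube writes may also carry log/storage options.
    printf '%s\n' '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json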
	I1025 18:01:58.539216   70293 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:01:58.603408   70293 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I1025 18:01:58.603476   70293 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:01:58.670052   70293 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:01:58.725701   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:01:58.787700   70293 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:01:58.812973   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:01:58.881299   70293 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 18:01:58.963610   70293 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:01:58.963712   70293 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:01:58.969075   70293 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 18:01:58.969103   70293 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 18:01:58.969113   70293 command_runner.go:130] > Device: ach/172d	Inode: 267         Links: 1
	I1025 18:01:58.969131   70293 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 18:01:58.969140   70293 command_runner.go:130] > Access: 2023-10-26 01:01:58.891782085 +0000
	I1025 18:01:58.969161   70293 command_runner.go:130] > Modify: 2023-10-26 01:01:58.891782085 +0000
	I1025 18:01:58.969169   70293 command_runner.go:130] > Change: 2023-10-26 01:01:58.902782086 +0000
	I1025 18:01:58.969174   70293 command_runner.go:130] >  Birth: 2023-10-26 01:01:58.891782085 +0000
	I1025 18:01:58.969204   70293 start.go:540] Will wait 60s for crictl version
	I1025 18:01:58.969262   70293 ssh_runner.go:195] Run: which crictl
	I1025 18:01:58.973720   70293 command_runner.go:130] > /usr/bin/crictl
	I1025 18:01:58.973798   70293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:01:59.017439   70293 command_runner.go:130] > Version:  0.1.0
	I1025 18:01:59.017452   70293 command_runner.go:130] > RuntimeName:  docker
	I1025 18:01:59.017456   70293 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1025 18:01:59.017461   70293 command_runner.go:130] > RuntimeApiVersion:  v1
	I1025 18:01:59.019509   70293 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 18:01:59.019591   70293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:01:59.044305   70293 command_runner.go:130] > 24.0.6
	I1025 18:01:59.045447   70293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:01:59.070693   70293 command_runner.go:130] > 24.0.6
	I1025 18:01:59.117235   70293 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 18:01:59.117414   70293 cli_runner.go:164] Run: docker exec -t multinode-971000 dig +short host.docker.internal
	I1025 18:01:59.237590   70293 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:01:59.237698   70293 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:01:59.242851   70293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:01:59.254373   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:01:59.305845   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:01:59.305914   70293 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:01:59.326533   70293 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1025 18:01:59.326546   70293 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 18:01:59.326550   70293 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1025 18:01:59.326556   70293 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1025 18:01:59.326560   70293 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1025 18:01:59.326564   70293 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1025 18:01:59.326568   70293 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1025 18:01:59.326575   70293 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:01:59.327562   70293 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:01:59.327587   70293 docker.go:623] Images already preloaded, skipping extraction
	I1025 18:01:59.327679   70293 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:01:59.347016   70293 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1025 18:01:59.347030   70293 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1025 18:01:59.347041   70293 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1025 18:01:59.347048   70293 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1025 18:01:59.347054   70293 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1025 18:01:59.347061   70293 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1025 18:01:59.347067   70293 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1025 18:01:59.347081   70293 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:01:59.348141   70293 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:01:59.348163   70293 cache_images.go:84] Images are preloaded, skipping loading
	I1025 18:01:59.348243   70293 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:01:59.399449   70293 command_runner.go:130] > cgroupfs
	I1025 18:01:59.400592   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:01:59.400605   70293 cni.go:136] 1 nodes found, recommending kindnet
	I1025 18:01:59.400623   70293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 18:01:59.400638   70293 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-971000 NodeName:multinode-971000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:01:59.400755   70293 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-971000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:01:59.400817   70293 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-971000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:01:59.400876   70293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 18:01:59.409970   70293 command_runner.go:130] > kubeadm
	I1025 18:01:59.409979   70293 command_runner.go:130] > kubectl
	I1025 18:01:59.409982   70293 command_runner.go:130] > kubelet
	I1025 18:01:59.410647   70293 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:01:59.410699   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:01:59.419780   70293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1025 18:01:59.436651   70293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:01:59.453548   70293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
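At this point the rendered kubeadm config shown above sits on the node as /var/tmp/minikube/kubeadm.yaml.new, next to the kubelet unit files. A config staged this way is ultimately fed to the kubeadm binary found earlier under /var/lib/minikube/binaries; an illustrative invocation only (a simplification, not the exact command minikube runs):

    # Illustrative: promote the staged config and drive kubeadm with it.
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml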
	I1025 18:01:59.470877   70293 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:01:59.475384   70293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:01:59.486947   70293 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000 for IP: 192.168.58.2
	I1025 18:01:59.486966   70293 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.487154   70293 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:01:59.487223   70293 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:01:59.487272   70293 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key
	I1025 18:01:59.487287   70293 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt with IP's: []
	I1025 18:01:59.600039   70293 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt ...
	I1025 18:01:59.600051   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt: {Name:mk64559d4fe4512acb57c5db6c94d26b48ee9a4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.600343   70293 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key ...
	I1025 18:01:59.600350   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key: {Name:mka03e9a439d934e99e8b908d2bbdfdb23cd0f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.600548   70293 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key.cee25041
	I1025 18:01:59.600562   70293 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 18:01:59.707555   70293 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt.cee25041 ...
	I1025 18:01:59.707565   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt.cee25041: {Name:mke095aa049bba03566453c031a11ef4f396369d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.707812   70293 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key.cee25041 ...
	I1025 18:01:59.707819   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key.cee25041: {Name:mkf0f5901f2f09a7b9f8ee0fb2794acddc7a12d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.708013   70293 certs.go:337] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt
	I1025 18:01:59.708178   70293 certs.go:341] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key
	I1025 18:01:59.708335   70293 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key
	I1025 18:01:59.708348   70293 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt with IP's: []
	I1025 18:01:59.801029   70293 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt ...
	I1025 18:01:59.801041   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt: {Name:mk28fa6a995bfac0944ebe68223bd61e361107f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.801296   70293 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key ...
	I1025 18:01:59.801309   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key: {Name:mk500d9606cd847ad8de5d70ff22cad1de5293f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:01:59.801493   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1025 18:01:59.801518   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1025 18:01:59.801535   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1025 18:01:59.801560   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1025 18:01:59.801577   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 18:01:59.801594   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 18:01:59.801609   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 18:01:59.801625   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 18:01:59.801716   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:01:59.801762   70293 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:01:59.801775   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:01:59.801803   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:01:59.801830   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:01:59.801863   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:01:59.801928   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:01:59.801963   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /usr/share/ca-certificates/652922.pem
	I1025 18:01:59.801982   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:01:59.802000   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem -> /usr/share/ca-certificates/65292.pem
	I1025 18:01:59.802521   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 18:01:59.826095   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 18:01:59.848783   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:01:59.872111   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 18:01:59.895606   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:01:59.918389   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:01:59.941164   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:01:59.963937   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:01:59.986871   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:02:00.010582   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:02:00.033652   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:02:00.056132   70293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:02:00.073458   70293 ssh_runner.go:195] Run: openssl version
	I1025 18:02:00.079135   70293 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1025 18:02:00.079418   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:02:00.089785   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:00.094373   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:00.094399   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:00.094440   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:00.101107   70293 command_runner.go:130] > b5213941
	I1025 18:02:00.101461   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:02:00.111662   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:02:00.121777   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:02:00.126219   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:02:00.126242   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:02:00.126289   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:02:00.133298   70293 command_runner.go:130] > 51391683
	I1025 18:02:00.133526   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:02:00.143703   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:02:00.153994   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:02:00.158598   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:02:00.158620   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:02:00.158671   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:02:00.165457   70293 command_runner.go:130] > 3ec20f2e
	I1025 18:02:00.165645   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
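	# The three openssl/ln pairs above follow the standard OpenSSL subject-hash lookup
	# convention: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<hash>.0
	# symlink, which is where the b5213941 / 51391683 / 3ec20f2e values come from.
	# A minimal sketch of one iteration, using the minikubeCA paths from the log:
	pem=/usr/share/ca-certificates/minikubeCA.pem
	sudo /bin/bash -c "test -s $pem && ln -fs $pem /etc/ssl/certs/minikubeCA.pem"
	hash=$(openssl x509 -hash -noout -in "$pem")          # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"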
	I1025 18:02:00.175603   70293 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:02:00.180129   70293 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:02:00.180146   70293 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:02:00.180187   70293 kubeadm.go:404] StartCluster: {Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:02:00.180288   70293 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:02:00.200864   70293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:02:00.209861   70293 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1025 18:02:00.209873   70293 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1025 18:02:00.209879   70293 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1025 18:02:00.210643   70293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:02:00.219814   70293 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:02:00.219869   70293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:02:00.229175   70293 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1025 18:02:00.229193   70293 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1025 18:02:00.229199   70293 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1025 18:02:00.229208   70293 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:02:00.229223   70293 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:02:00.229248   70293 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
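	# The one-line kubeadm invocation above, reflowed so the ignored preflight checks are
	# readable (flags copied verbatim from the log). Checks such as Swap, NumCPU, Mem and
	# SystemVerification are skipped because the docker driver runs the node inside a
	# container (see "ignoring SystemVerification ... because of docker driver" above):
	IGNORE=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube
	IGNORE+=,DirAvailable--var-lib-minikube-etcd
	IGNORE+=,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml
	IGNORE+=,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml
	IGNORE+=,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml
	IGNORE+=,FileAvailable--etc-kubernetes-manifests-etcd.yaml
	IGNORE+=,Port-10250,Swap,NumCPU,Mem,SystemVerification
	IGNORE+=,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
	sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors="$IGNORE"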
	I1025 18:02:00.271409   70293 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 18:02:00.271425   70293 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1025 18:02:00.271470   70293 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:02:00.271483   70293 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 18:02:00.393950   70293 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:02:00.393996   70293 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:02:00.394082   70293 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:02:00.394090   70293 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:02:00.394201   70293 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:02:00.394215   70293 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:02:00.675530   70293 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:02:00.675549   70293 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:02:00.717618   70293 out.go:204]   - Generating certificates and keys ...
	I1025 18:02:00.717677   70293 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1025 18:02:00.717690   70293 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:02:00.717787   70293 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1025 18:02:00.717798   70293 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:02:01.017813   70293 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 18:02:01.017828   70293 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 18:02:01.216080   70293 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 18:02:01.216120   70293 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1025 18:02:01.361073   70293 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 18:02:01.361083   70293 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1025 18:02:01.497350   70293 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 18:02:01.497407   70293 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1025 18:02:01.587903   70293 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 18:02:01.587918   70293 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1025 18:02:01.588033   70293 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-971000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 18:02:01.588043   70293 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-971000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 18:02:01.831660   70293 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 18:02:01.831684   70293 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1025 18:02:01.831795   70293 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-971000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 18:02:01.831803   70293 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-971000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1025 18:02:02.187274   70293 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 18:02:02.187290   70293 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 18:02:02.327439   70293 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 18:02:02.327452   70293 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 18:02:02.556543   70293 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 18:02:02.556568   70293 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1025 18:02:02.556614   70293 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:02:02.556639   70293 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:02:02.675830   70293 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:02:02.675840   70293 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:02:02.770986   70293 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:02:02.770997   70293 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:02:02.975096   70293 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:02:02.975110   70293 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:02:03.129244   70293 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:02:03.129263   70293 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:02:03.129734   70293 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:02:03.129747   70293 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:02:03.132943   70293 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:02:03.132958   70293 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:02:03.154525   70293 out.go:204]   - Booting up control plane ...
	I1025 18:02:03.154607   70293 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:02:03.154612   70293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:02:03.154683   70293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:02:03.154692   70293 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:02:03.154771   70293 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:02:03.154775   70293 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:02:03.154866   70293 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:02:03.154874   70293 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:02:03.154974   70293 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:02:03.154989   70293 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:02:03.155046   70293 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 18:02:03.155052   70293 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1025 18:02:03.220245   70293 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:02:03.220261   70293 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:02:08.223537   70293 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002912 seconds
	I1025 18:02:08.223563   70293 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002912 seconds
	I1025 18:02:08.223741   70293 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 18:02:08.223756   70293 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 18:02:08.234134   70293 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 18:02:08.234149   70293 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 18:02:08.751449   70293 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 18:02:08.751467   70293 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1025 18:02:08.751629   70293 kubeadm.go:322] [mark-control-plane] Marking the node multinode-971000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 18:02:08.751648   70293 command_runner.go:130] > [mark-control-plane] Marking the node multinode-971000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 18:02:09.259722   70293 kubeadm.go:322] [bootstrap-token] Using token: g4l4ie.shzm0oxmox6k5n03
	I1025 18:02:09.259733   70293 command_runner.go:130] > [bootstrap-token] Using token: g4l4ie.shzm0oxmox6k5n03
	I1025 18:02:09.299274   70293 out.go:204]   - Configuring RBAC rules ...
	I1025 18:02:09.299385   70293 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 18:02:09.299396   70293 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 18:02:09.341578   70293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 18:02:09.341584   70293 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 18:02:09.347980   70293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 18:02:09.347996   70293 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 18:02:09.352873   70293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 18:02:09.352890   70293 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 18:02:09.356627   70293 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 18:02:09.356635   70293 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 18:02:09.360157   70293 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 18:02:09.360176   70293 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 18:02:09.369969   70293 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 18:02:09.369981   70293 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 18:02:09.550694   70293 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 18:02:09.550711   70293 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1025 18:02:09.748835   70293 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 18:02:09.748877   70293 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1025 18:02:09.750071   70293 kubeadm.go:322] 
	I1025 18:02:09.750163   70293 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 18:02:09.750213   70293 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1025 18:02:09.750229   70293 kubeadm.go:322] 
	I1025 18:02:09.750317   70293 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 18:02:09.750328   70293 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1025 18:02:09.750334   70293 kubeadm.go:322] 
	I1025 18:02:09.750363   70293 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 18:02:09.750370   70293 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1025 18:02:09.750451   70293 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 18:02:09.750459   70293 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 18:02:09.750523   70293 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 18:02:09.750536   70293 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 18:02:09.750550   70293 kubeadm.go:322] 
	I1025 18:02:09.750670   70293 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1025 18:02:09.750681   70293 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 18:02:09.750688   70293 kubeadm.go:322] 
	I1025 18:02:09.750765   70293 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 18:02:09.750776   70293 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 18:02:09.750783   70293 kubeadm.go:322] 
	I1025 18:02:09.750852   70293 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1025 18:02:09.750870   70293 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 18:02:09.751017   70293 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 18:02:09.751037   70293 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 18:02:09.751143   70293 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 18:02:09.751157   70293 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 18:02:09.751174   70293 kubeadm.go:322] 
	I1025 18:02:09.751294   70293 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1025 18:02:09.751338   70293 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 18:02:09.751484   70293 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1025 18:02:09.751499   70293 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 18:02:09.751513   70293 kubeadm.go:322] 
	I1025 18:02:09.751668   70293 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g4l4ie.shzm0oxmox6k5n03 \
	I1025 18:02:09.751681   70293 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token g4l4ie.shzm0oxmox6k5n03 \
	I1025 18:02:09.751827   70293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d \
	I1025 18:02:09.751840   70293 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d \
	I1025 18:02:09.751867   70293 kubeadm.go:322] 	--control-plane 
	I1025 18:02:09.751873   70293 command_runner.go:130] > 	--control-plane 
	I1025 18:02:09.751883   70293 kubeadm.go:322] 
	I1025 18:02:09.752001   70293 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 18:02:09.752012   70293 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1025 18:02:09.752028   70293 kubeadm.go:322] 
	I1025 18:02:09.752229   70293 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g4l4ie.shzm0oxmox6k5n03 \
	I1025 18:02:09.752261   70293 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token g4l4ie.shzm0oxmox6k5n03 \
	I1025 18:02:09.752425   70293 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d 
	I1025 18:02:09.752430   70293 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d 
	I1025 18:02:09.754745   70293 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 18:02:09.754781   70293 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 18:02:09.754970   70293 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:02:09.754971   70293 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
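	# The sha256 value in --discovery-token-ca-cert-hash above is the hash of the cluster CA
	# public key. If the printed join command is lost, it can be recomputed on the control
	# plane with the standard pipeline from the kubeadm docs (sketch; cert path taken from the
	# certificateDir used above, and it assumes the default RSA CA key):
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'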
	I1025 18:02:09.754990   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:02:09.755016   70293 cni.go:136] 1 nodes found, recommending kindnet
	I1025 18:02:09.793082   70293 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1025 18:02:09.835717   70293 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 18:02:09.843239   70293 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1025 18:02:09.843270   70293 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1025 18:02:09.843281   70293 command_runner.go:130] > Device: a4h/164d	Inode: 1049408     Links: 1
	I1025 18:02:09.843312   70293 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 18:02:09.843331   70293 command_runner.go:130] > Access: 2023-10-26 00:39:30.623217190 +0000
	I1025 18:02:09.843346   70293 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1025 18:02:09.843360   70293 command_runner.go:130] > Change: 2023-10-26 00:39:15.549105052 +0000
	I1025 18:02:09.843369   70293 command_runner.go:130] >  Birth: 2023-10-26 00:39:15.509105049 +0000
	I1025 18:02:09.843491   70293 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 18:02:09.843503   70293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 18:02:09.869816   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 18:02:10.474000   70293 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1025 18:02:10.478575   70293 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1025 18:02:10.485609   70293 command_runner.go:130] > serviceaccount/kindnet created
	I1025 18:02:10.492977   70293 command_runner.go:130] > daemonset.apps/kindnet created
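	# The four "created" lines confirm the kindnet CNI manifest applied cleanly. One way to
	# check that the DaemonSet actually schedules a pod on this node (illustrative; reuses the
	# in-node kubeconfig path from the log, not a call minikube itself makes here):
	sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonset kindnet -o wide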
	I1025 18:02:10.496737   70293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:02:10.496821   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=multinode-971000 minikube.k8s.io/updated_at=2023_10_25T18_02_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:10.496822   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:10.505919   70293 command_runner.go:130] > -16
	I1025 18:02:10.505955   70293 ops.go:34] apiserver oom_adj: -16
	I1025 18:02:10.576467   70293 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1025 18:02:10.576604   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:10.587651   70293 command_runner.go:130] > node/multinode-971000 labeled
	I1025 18:02:10.689253   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:10.689330   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:10.754773   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:11.255168   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:11.320163   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:11.755137   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:11.823194   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:12.255883   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:12.320450   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:12.755439   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:12.822236   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:13.255271   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:13.325545   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:13.755304   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:13.821854   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:14.255642   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:14.321994   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:14.755240   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:14.821743   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:15.255904   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:15.320078   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:15.757116   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:15.827033   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:16.256931   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:16.325906   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:16.755940   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:16.824699   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:17.257273   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:17.321523   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:17.755962   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:17.821408   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:18.255527   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:18.321416   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:18.756654   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:18.825537   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:19.256410   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:19.320937   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:19.755505   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:19.823916   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:20.257172   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:20.325270   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:20.757434   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:20.825351   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:21.255506   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:21.344623   70293 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1025 18:02:21.755266   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:02:21.822277   70293 command_runner.go:130] > NAME      SECRETS   AGE
	I1025 18:02:21.822290   70293 command_runner.go:130] > default   0         0s
	I1025 18:02:21.822301   70293 kubeadm.go:1081] duration metric: took 11.325213593s to wait for elevateKubeSystemPrivileges.
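	# The repeated 'serviceaccounts "default" not found' errors above are expected: minikube
	# polls roughly every 500ms until kube-controller-manager has created the default
	# ServiceAccount, which took ~11.3s here. A shell sketch of the same wait loop:
	until sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done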
	I1025 18:02:21.822317   70293 kubeadm.go:406] StartCluster complete in 21.641485667s
	I1025 18:02:21.822335   70293 settings.go:142] acquiring lock: {Name:mkca0a8fe84aa865309571104a1d51551b90d38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:02:21.822418   70293 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:21.822969   70293 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:02:21.823254   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 18:02:21.823272   70293 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 18:02:21.823317   70293 addons.go:69] Setting storage-provisioner=true in profile "multinode-971000"
	I1025 18:02:21.823329   70293 addons.go:69] Setting default-storageclass=true in profile "multinode-971000"
	I1025 18:02:21.823333   70293 addons.go:231] Setting addon storage-provisioner=true in "multinode-971000"
	I1025 18:02:21.823354   70293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-971000"
	I1025 18:02:21.823378   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:02:21.823381   70293 host.go:66] Checking if "multinode-971000" exists ...
	I1025 18:02:21.823633   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:02:21.823651   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:21.823783   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:02:21.824503   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:21.828119   70293 cert_rotation.go:137] Starting client certificate rotation controller
	I1025 18:02:21.828412   70293 round_trippers.go:463] GET https://127.0.0.1:57083/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 18:02:21.828423   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:21.828431   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:21.828439   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:21.839605   70293 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1025 18:02:21.839619   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:21.839625   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:21.839644   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:21.839648   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:21.839674   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:21.839679   70293 round_trippers.go:580]     Content-Length: 291
	I1025 18:02:21.839683   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:21 GMT
	I1025 18:02:21.839688   70293 round_trippers.go:580]     Audit-Id: 0ba7391c-69af-48a7-8241-1bf6da20c3e7
	I1025 18:02:21.839758   70293 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"268","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:21.840272   70293 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"268","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:21.840308   70293 round_trippers.go:463] PUT https://127.0.0.1:57083/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 18:02:21.840313   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:21.840319   70293 round_trippers.go:473]     Content-Type: application/json
	I1025 18:02:21.840327   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:21.840333   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:21.846966   70293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1025 18:02:21.847003   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:21.847015   70293 round_trippers.go:580]     Audit-Id: 35a35a08-c3f1-4639-8d3a-053789656b40
	I1025 18:02:21.847023   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:21.847028   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:21.847042   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:21.847063   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:21.847102   70293 round_trippers.go:580]     Content-Length: 291
	I1025 18:02:21.847110   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:21 GMT
	I1025 18:02:21.847127   70293 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"335","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:21.847249   70293 round_trippers.go:463] GET https://127.0.0.1:57083/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 18:02:21.847255   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:21.847261   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:21.847267   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:21.852443   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:21.852461   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:21.852472   70293 round_trippers.go:580]     Content-Length: 291
	I1025 18:02:21.852481   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:21 GMT
	I1025 18:02:21.852488   70293 round_trippers.go:580]     Audit-Id: 3ce6c842-92e1-481a-999f-b0b84a1e30d0
	I1025 18:02:21.852496   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:21.852504   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:21.852515   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:21.852524   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:21.852549   70293 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"335","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:21.852631   70293 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-971000" context rescaled to 1 replicas
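	# The GET/PUT pair above drops the coredns Deployment from kubeadm's default of 2 replicas
	# to 1 for this single-node start. The kubectl equivalent of that Scale PUT (illustrative,
	# using the host kubeconfig updated earlier; minikube itself issues the API call directly):
	kubectl --kubeconfig=/Users/jenkins/minikube-integration/17488-64832/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1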
	I1025 18:02:21.852659   70293 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:02:21.874725   70293 out.go:177] * Verifying Kubernetes components...
	I1025 18:02:21.916448   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:02:21.945278   70293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:02:21.924235   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:21.934213   70293 command_runner.go:130] > apiVersion: v1
	I1025 18:02:21.982263   70293 command_runner.go:130] > data:
	I1025 18:02:21.945522   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:21.982298   70293 command_runner.go:130] >   Corefile: |
	I1025 18:02:21.982310   70293 command_runner.go:130] >     .:53 {
	I1025 18:02:21.982314   70293 command_runner.go:130] >         errors
	I1025 18:02:21.982341   70293 command_runner.go:130] >         health {
	I1025 18:02:21.982348   70293 command_runner.go:130] >            lameduck 5s
	I1025 18:02:21.982349   70293 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:02:21.982352   70293 command_runner.go:130] >         }
	I1025 18:02:21.982361   70293 command_runner.go:130] >         ready
	I1025 18:02:21.982362   70293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:02:21.982373   70293 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1025 18:02:21.982378   70293 command_runner.go:130] >            pods insecure
	I1025 18:02:21.982388   70293 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1025 18:02:21.982394   70293 command_runner.go:130] >            ttl 30
	I1025 18:02:21.982411   70293 command_runner.go:130] >         }
	I1025 18:02:21.982416   70293 command_runner.go:130] >         prometheus :9153
	I1025 18:02:21.982420   70293 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1025 18:02:21.982426   70293 command_runner.go:130] >            max_concurrent 1000
	I1025 18:02:21.982430   70293 command_runner.go:130] >         }
	I1025 18:02:21.982433   70293 command_runner.go:130] >         cache 30
	I1025 18:02:21.982437   70293 command_runner.go:130] >         loop
	I1025 18:02:21.982437   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:21.982440   70293 command_runner.go:130] >         reload
	I1025 18:02:21.982448   70293 command_runner.go:130] >         loadbalance
	I1025 18:02:21.982452   70293 command_runner.go:130] >     }
	I1025 18:02:21.982455   70293 command_runner.go:130] > kind: ConfigMap
	I1025 18:02:21.982464   70293 command_runner.go:130] > metadata:
	I1025 18:02:21.982472   70293 command_runner.go:130] >   creationTimestamp: "2023-10-26T01:02:09Z"
	I1025 18:02:21.982474   70293 addons.go:231] Setting addon default-storageclass=true in "multinode-971000"
	I1025 18:02:21.982477   70293 command_runner.go:130] >   name: coredns
	I1025 18:02:21.982483   70293 command_runner.go:130] >   namespace: kube-system
	I1025 18:02:21.982487   70293 command_runner.go:130] >   resourceVersion: "264"
	I1025 18:02:21.982491   70293 command_runner.go:130] >   uid: 2fc1cf57-eba4-447b-8e4e-de7a7b3ccd98
	I1025 18:02:21.982492   70293 host.go:66] Checking if "multinode-971000" exists ...
	I1025 18:02:21.982597   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:21.982668   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
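	# The sed pipeline above patches the live coredns ConfigMap so that host.minikube.internal
	# resolves to the host gateway. After the replace, the Corefile carries a stanza like this
	# (reconstructed from the sed expression in the command, shown here only as a comment):
	#         hosts {
	#            192.168.65.254 host.minikube.internal
	#            fallthrough
	#         }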
	I1025 18:02:21.983617   70293 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:02:22.058087   70293 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:02:22.058117   70293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:02:22.058251   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:22.059220   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:02:22.059478   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:22.059953   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:22.060418   70293 node_ready.go:35] waiting up to 6m0s for node "multinode-971000" to be "Ready" ...
	I1025 18:02:22.060506   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.060520   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.060537   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.060549   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.065996   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:22.066023   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.066031   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.066036   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.066041   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.066046   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.066061   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.066074   70293 round_trippers.go:580]     Audit-Id: 9233847b-9bc3-40d1-9b18-a1b5e43dd4f8
	I1025 18:02:22.066974   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.068992   70293 node_ready.go:49] node "multinode-971000" has status "Ready":"True"
	I1025 18:02:22.069011   70293 node_ready.go:38] duration metric: took 8.559125ms waiting for node "multinode-971000" to be "Ready" ...
	I1025 18:02:22.069023   70293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:02:22.069095   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:22.069104   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.069116   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.069127   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.074058   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:22.074089   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.074100   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.074123   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.074138   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.074152   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.074162   70293 round_trippers.go:580]     Audit-Id: 16698d60-13d6-49af-b488-f6acefe8d8ba
	I1025 18:02:22.074197   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.074651   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"343"},"items":[{"metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"302","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30360 chars]
	I1025 18:02:22.077517   70293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.077580   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/etcd-multinode-971000
	I1025 18:02:22.077586   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.077593   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.077600   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.081207   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:22.081225   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.081231   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.081236   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.081241   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.081248   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.081254   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.081259   70293 round_trippers.go:580]     Audit-Id: 58177826-596f-47d1-9387-3e5833198f4c
	I1025 18:02:22.081343   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"302","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6076 chars]
	I1025 18:02:22.081629   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.081637   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.081643   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.081649   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.118335   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:02:22.137962   70293 round_trippers.go:574] Response Status: 200 OK in 56 milliseconds
	I1025 18:02:22.137983   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.137994   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.138003   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.138013   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.138024   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.138034   70293 round_trippers.go:580]     Audit-Id: 20de7bf3-1b40-4e26-8864-6a9d34e9d689
	I1025 18:02:22.138045   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.138442   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.138806   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/etcd-multinode-971000
	I1025 18:02:22.138817   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.138826   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.138834   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.142474   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:22.142494   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.142506   70293 round_trippers.go:580]     Audit-Id: a9762499-44f4-4eda-8d1e-3dafd7cf8472
	I1025 18:02:22.142518   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.142529   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.142537   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.142545   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.142552   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.142953   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"302","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6076 chars]
	I1025 18:02:22.143320   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.143335   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.143349   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.143367   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.146811   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:22.146853   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.146882   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.146899   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.146916   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.146930   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.146939   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.146949   70293 round_trippers.go:580]     Audit-Id: 23028079-bb8d-4b54-82b8-11095a681461
	I1025 18:02:22.147083   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.333640   70293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:02:22.534982   70293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:02:22.647641   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/etcd-multinode-971000
	I1025 18:02:22.647678   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.647693   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.647705   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.653351   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:22.653368   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.653375   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.653380   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.653385   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.653389   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.653394   70293 round_trippers.go:580]     Audit-Id: bd37ee79-d0c1-41c9-bfd0-38cd1b6b32cc
	I1025 18:02:22.653398   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.653866   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"353","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I1025 18:02:22.654417   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.654427   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.654435   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.654440   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.684546   70293 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1025 18:02:22.684565   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.684573   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.684584   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.684590   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.684598   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.684604   70293 round_trippers.go:580]     Audit-Id: 187bf39d-c6e1-43e2-9460-aaf18d1d3cb5
	I1025 18:02:22.684611   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.684729   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.684994   70293 pod_ready.go:92] pod "etcd-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:22.685005   70293 pod_ready.go:81] duration metric: took 607.45433ms waiting for pod "etcd-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.685014   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.685060   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-971000
	I1025 18:02:22.685066   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.685074   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.685081   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.736328   70293 round_trippers.go:574] Response Status: 200 OK in 51 milliseconds
	I1025 18:02:22.736351   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.736361   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.736379   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.736423   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.736438   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.736467   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.736486   70293 round_trippers.go:580]     Audit-Id: 9b20a9d3-1ec2-4946-9388-77ea201ec014
	I1025 18:02:22.737511   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-971000","namespace":"kube-system","uid":"b4400411-c3b7-408c-b79f-a2e005efbef3","resourceVersion":"378","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3673709ea844b9ea542719bd93b9f9af","kubernetes.io/config.mirror":"3673709ea844b9ea542719bd93b9f9af","kubernetes.io/config.seen":"2023-10-26T01:02:09.640588239Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8238 chars]
	I1025 18:02:22.738043   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.738057   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.738069   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.738080   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.744311   70293 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1025 18:02:22.744331   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.744350   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.744364   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.744369   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.744375   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.744384   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.744393   70293 round_trippers.go:580]     Audit-Id: 485be69f-3a99-4e9d-9ce4-772aec09365f
	I1025 18:02:22.744476   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.744808   70293 pod_ready.go:92] pod "kube-apiserver-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:22.744823   70293 pod_ready.go:81] duration metric: took 59.799765ms waiting for pod "kube-apiserver-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.744836   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:22.744890   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-971000
	I1025 18:02:22.744900   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.744909   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.744917   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.748766   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:22.748791   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.748803   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.748815   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.748825   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.748836   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.748844   70293 round_trippers.go:580]     Audit-Id: cb4d8def-ff6f-425a-b955-9c8331b59044
	I1025 18:02:22.748858   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.749045   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-971000","namespace":"kube-system","uid":"6347ae2f-f5d5-4533-8b15-4cb194fd7c75","resourceVersion":"301","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.mirror":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8075 chars]
	I1025 18:02:22.749502   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.749515   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.749525   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.749534   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.833561   70293 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I1025 18:02:22.833594   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.833609   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.833638   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.833658   70293 round_trippers.go:580]     Audit-Id: d9333d57-07a9-40ac-b5c1-0622417a631e
	I1025 18:02:22.833672   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.833690   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.833705   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.833864   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.834466   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-971000
	I1025 18:02:22.834484   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.834500   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.834520   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.840183   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:22.840202   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.840212   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.840220   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.840228   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.840248   70293 round_trippers.go:580]     Audit-Id: 6a5701f7-2719-43be-909b-cef485a2fdd7
	I1025 18:02:22.840260   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.840268   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.840450   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-971000","namespace":"kube-system","uid":"6347ae2f-f5d5-4533-8b15-4cb194fd7c75","resourceVersion":"301","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.mirror":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8075 chars]
	I1025 18:02:22.858949   70293 command_runner.go:130] > configmap/coredns replaced
	I1025 18:02:22.860611   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:22.860626   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:22.860648   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:22.860682   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:22.936768   70293 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I1025 18:02:22.936786   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:22.936795   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:22.936805   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:22.936815   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:22 GMT
	I1025 18:02:22.936829   70293 round_trippers.go:580]     Audit-Id: b46bfb8b-524a-4fde-a75a-0af1ee668f77
	I1025 18:02:22.936838   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:22.936845   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:22.937030   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:22.940009   70293 start.go:926] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1025 18:02:23.437620   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-971000
	I1025 18:02:23.437640   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.437677   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.437689   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.441964   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:23.441981   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.441988   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.441999   70293 round_trippers.go:580]     Audit-Id: 9b9f17c4-4214-4775-9060-83de68c33eba
	I1025 18:02:23.442010   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.442019   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.442029   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.442053   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.442558   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-971000","namespace":"kube-system","uid":"6347ae2f-f5d5-4533-8b15-4cb194fd7c75","resourceVersion":"392","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.mirror":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7813 chars]
	I1025 18:02:23.443113   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:23.443128   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.443138   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.443147   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.447195   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:23.447215   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.447231   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.447242   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.447250   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.447257   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.447275   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.447283   70293 round_trippers.go:580]     Audit-Id: d117b489-82a7-4f16-a14f-26586d5b09b5
	I1025 18:02:23.447393   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:23.447766   70293 pod_ready.go:92] pod "kube-controller-manager-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:23.447777   70293 pod_ready.go:81] duration metric: took 702.913178ms waiting for pod "kube-controller-manager-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:23.447789   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:23.460911   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-971000
	I1025 18:02:23.460925   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.460934   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.460940   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.464911   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:23.464928   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.464945   70293 round_trippers.go:580]     Audit-Id: 245710a3-e747-4801-90b6-50eda51b536d
	I1025 18:02:23.464958   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.464965   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.464970   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.464974   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.464979   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.465095   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-971000","namespace":"kube-system","uid":"411ae656-7e8b-4e4e-892e-9873855be79f","resourceVersion":"304","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"666bb44a088f2de4036212af9c22245b","kubernetes.io/config.mirror":"666bb44a088f2de4036212af9c22245b","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I1025 18:02:23.532582   70293 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1025 18:02:23.538964   70293 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1025 18:02:23.551188   70293 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1025 18:02:23.561204   70293 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1025 18:02:23.638180   70293 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1025 18:02:23.652958   70293 command_runner.go:130] > pod/storage-provisioner created
	I1025 18:02:23.657512   70293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.32380028s)
	I1025 18:02:23.657549   70293 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1025 18:02:23.657634   70293 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122577117s)
	I1025 18:02:23.657761   70293 round_trippers.go:463] GET https://127.0.0.1:57083/apis/storage.k8s.io/v1/storageclasses
	I1025 18:02:23.657822   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.657838   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.657849   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.660662   70293 request.go:629] Waited for 195.192974ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:23.660712   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:23.660723   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.660734   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.660746   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.661354   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:23.661385   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.661401   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.661416   70293 round_trippers.go:580]     Content-Length: 1273
	I1025 18:02:23.661426   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.661433   70293 round_trippers.go:580]     Audit-Id: d4b92fc9-a46d-4b87-87a9-68969d6d0dd1
	I1025 18:02:23.661439   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.661444   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.661460   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.661893   70293 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"standard","uid":"f6c92594-2313-4046-87c3-7ae92ca50b39","resourceVersion":"394","creationTimestamp":"2023-10-26T01:02:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T01:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1025 18:02:23.662433   70293 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f6c92594-2313-4046-87c3-7ae92ca50b39","resourceVersion":"394","creationTimestamp":"2023-10-26T01:02:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T01:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 18:02:23.662488   70293 round_trippers.go:463] PUT https://127.0.0.1:57083/apis/storage.k8s.io/v1/storageclasses/standard
	I1025 18:02:23.662502   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.662514   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.662524   70293 round_trippers.go:473]     Content-Type: application/json
	I1025 18:02:23.662530   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.735715   70293 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I1025 18:02:23.735732   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.735738   70293 round_trippers.go:580]     Audit-Id: 441c0fd1-3d83-4767-b201-fc1c07681b7d
	I1025 18:02:23.735745   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.735752   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.735759   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.735766   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.735774   70293 round_trippers.go:580]     Content-Length: 1220
	I1025 18:02:23.735780   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.735844   70293 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f6c92594-2313-4046-87c3-7ae92ca50b39","resourceVersion":"394","creationTimestamp":"2023-10-26T01:02:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-26T01:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1025 18:02:23.736003   70293 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I1025 18:02:23.736018   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.736031   70293 round_trippers.go:580]     Audit-Id: 2e82dfc9-9eeb-47c6-8399-acb08dda3ca4
	I1025 18:02:23.736044   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.736052   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.736061   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.736071   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.798419   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.798393   70293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 18:02:23.819389   70293 addons.go:502] enable addons completed in 1.996061365s: enabled=[storage-provisioner default-storageclass]
	I1025 18:02:23.798501   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:23.819798   70293 pod_ready.go:92] pod "kube-scheduler-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:23.819816   70293 pod_ready.go:81] duration metric: took 372.001138ms waiting for pod "kube-scheduler-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:23.819825   70293 pod_ready.go:38] duration metric: took 1.750736333s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:02:23.819844   70293 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:02:23.819926   70293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:02:23.846724   70293 command_runner.go:130] > 2278
	I1025 18:02:23.847756   70293 api_server.go:72] duration metric: took 1.995007964s to wait for apiserver process to appear ...
	I1025 18:02:23.847774   70293 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:02:23.847798   70293 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57083/healthz ...
	I1025 18:02:23.854010   70293 api_server.go:279] https://127.0.0.1:57083/healthz returned 200:
	ok
	I1025 18:02:23.854060   70293 round_trippers.go:463] GET https://127.0.0.1:57083/version
	I1025 18:02:23.854065   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.854074   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.854081   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.855922   70293 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1025 18:02:23.855933   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.855939   70293 round_trippers.go:580]     Content-Length: 264
	I1025 18:02:23.855944   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.855949   70293 round_trippers.go:580]     Audit-Id: 3dc32f43-800a-42aa-bc98-3657c550e5af
	I1025 18:02:23.855954   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.855963   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.855968   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.855972   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.855984   70293 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1025 18:02:23.856032   70293 api_server.go:141] control plane version: v1.28.3
	I1025 18:02:23.856040   70293 api_server.go:131] duration metric: took 8.259025ms to wait for apiserver health ...
	I1025 18:02:23.856045   70293 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:02:23.860697   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:23.860708   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:23.860715   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:23.860720   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:23.866071   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:23.866091   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:23.866101   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:23.866110   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:23.866118   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:23.866126   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:23 GMT
	I1025 18:02:23.866134   70293 round_trippers.go:580]     Audit-Id: 8a389f18-f2ce-4ee5-b960-38ea89021abe
	I1025 18:02:23.866142   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:23.867414   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"409"},"items":[{"metadata":{"name":"coredns-5dd5756b68-cvn82","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b00548f2-a206-488a-9e2b-45f2e1066597","resourceVersion":"387","creationTimestamp":"2023-10-26T01:02:22Z","deletionTimestamp":"2023-10-26T01:02:52Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe
4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 61875 chars]
	I1025 18:02:23.870039   70293 system_pods.go:59] 9 kube-system pods found
	I1025 18:02:23.870069   70293 system_pods.go:61] "coredns-5dd5756b68-cvn82" [b00548f2-a206-488a-9e2b-45f2e1066597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:23.870079   70293 system_pods.go:61] "coredns-5dd5756b68-vm8jw" [8747ca8b-8044-46a8-a5bd-700e0fb6ceb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:23.870084   70293 system_pods.go:61] "etcd-multinode-971000" [686f24fe-a02b-4a6b-8790-b0d2628424c1] Running
	I1025 18:02:23.870089   70293 system_pods.go:61] "kindnet-5txks" [5b661079-5482-4abd-8420-09db800cc9b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 18:02:23.870094   70293 system_pods.go:61] "kube-apiserver-multinode-971000" [b4400411-c3b7-408c-b79f-a2e005efbef3] Running
	I1025 18:02:23.870098   70293 system_pods.go:61] "kube-controller-manager-multinode-971000" [6347ae2f-f5d5-4533-8b15-4cb194fd7c75] Running
	I1025 18:02:23.870103   70293 system_pods.go:61] "kube-proxy-2dzxx" [449549c6-a5cd-4468-b565-55811bb44448] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 18:02:23.870107   70293 system_pods.go:61] "kube-scheduler-multinode-971000" [411ae656-7e8b-4e4e-892e-9873855be79f] Running
	I1025 18:02:23.870112   70293 system_pods.go:61] "storage-provisioner" [8a6d679a-a32e-4707-ad40-063155cf0cde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:02:23.870117   70293 system_pods.go:74] duration metric: took 14.067401ms to wait for pod list to return data ...
	I1025 18:02:23.870124   70293 default_sa.go:34] waiting for default service account to be created ...
	I1025 18:02:24.060795   70293 request.go:629] Waited for 190.605602ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/default/serviceaccounts
	I1025 18:02:24.060889   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/default/serviceaccounts
	I1025 18:02:24.060942   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.060950   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.060956   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.064908   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:24.064930   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.064942   70293 round_trippers.go:580]     Content-Length: 261
	I1025 18:02:24.064953   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.064961   70293 round_trippers.go:580]     Audit-Id: aa7323a0-427f-4da0-acdf-55724076bd00
	I1025 18:02:24.064970   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.064980   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.064991   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.065005   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.065041   70293 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0c2b808-2d7c-4263-802e-9812df34c54c","resourceVersion":"328","creationTimestamp":"2023-10-26T01:02:21Z"}}]}
	I1025 18:02:24.065232   70293 default_sa.go:45] found service account: "default"
	I1025 18:02:24.065248   70293 default_sa.go:55] duration metric: took 195.11068ms for default service account to be created ...
	I1025 18:02:24.065260   70293 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 18:02:24.260625   70293 request.go:629] Waited for 195.309388ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:24.260656   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:24.260662   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.260668   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.260674   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.264671   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:24.264683   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.264689   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.264694   70293 round_trippers.go:580]     Audit-Id: b75fd28d-f95f-4dc6-a3cd-a23387c8cad6
	I1025 18:02:24.264699   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.264704   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.264708   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.264713   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.265928   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-cvn82","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b00548f2-a206-488a-9e2b-45f2e1066597","resourceVersion":"387","creationTimestamp":"2023-10-26T01:02:22Z","deletionTimestamp":"2023-10-26T01:02:52Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe
4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 61875 chars]
	I1025 18:02:24.267372   70293 system_pods.go:86] 9 kube-system pods found
	I1025 18:02:24.267386   70293 system_pods.go:89] "coredns-5dd5756b68-cvn82" [b00548f2-a206-488a-9e2b-45f2e1066597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.267392   70293 system_pods.go:89] "coredns-5dd5756b68-vm8jw" [8747ca8b-8044-46a8-a5bd-700e0fb6ceb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.267398   70293 system_pods.go:89] "etcd-multinode-971000" [686f24fe-a02b-4a6b-8790-b0d2628424c1] Running
	I1025 18:02:24.267403   70293 system_pods.go:89] "kindnet-5txks" [5b661079-5482-4abd-8420-09db800cc9b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 18:02:24.267407   70293 system_pods.go:89] "kube-apiserver-multinode-971000" [b4400411-c3b7-408c-b79f-a2e005efbef3] Running
	I1025 18:02:24.267428   70293 system_pods.go:89] "kube-controller-manager-multinode-971000" [6347ae2f-f5d5-4533-8b15-4cb194fd7c75] Running
	I1025 18:02:24.267440   70293 system_pods.go:89] "kube-proxy-2dzxx" [449549c6-a5cd-4468-b565-55811bb44448] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 18:02:24.267446   70293 system_pods.go:89] "kube-scheduler-multinode-971000" [411ae656-7e8b-4e4e-892e-9873855be79f] Running
	I1025 18:02:24.267451   70293 system_pods.go:89] "storage-provisioner" [8a6d679a-a32e-4707-ad40-063155cf0cde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:02:24.267469   70293 retry.go:31] will retry after 286.432033ms: missing components: kube-dns, kube-proxy
	I1025 18:02:24.554155   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:24.554166   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.554173   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.554178   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.557769   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:24.557791   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.557810   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.557822   70293 round_trippers.go:580]     Audit-Id: 5d07e617-d427-432f-bf20-ef648cd3219f
	I1025 18:02:24.557831   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.557836   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.557842   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.557846   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.558307   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-cvn82","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b00548f2-a206-488a-9e2b-45f2e1066597","resourceVersion":"387","creationTimestamp":"2023-10-26T01:02:22Z","deletionTimestamp":"2023-10-26T01:02:52Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe
4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{ [truncated 61875 chars]
	I1025 18:02:24.559752   70293 system_pods.go:86] 9 kube-system pods found
	I1025 18:02:24.559765   70293 system_pods.go:89] "coredns-5dd5756b68-cvn82" [b00548f2-a206-488a-9e2b-45f2e1066597] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.559771   70293 system_pods.go:89] "coredns-5dd5756b68-vm8jw" [8747ca8b-8044-46a8-a5bd-700e0fb6ceb8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.559775   70293 system_pods.go:89] "etcd-multinode-971000" [686f24fe-a02b-4a6b-8790-b0d2628424c1] Running
	I1025 18:02:24.559799   70293 system_pods.go:89] "kindnet-5txks" [5b661079-5482-4abd-8420-09db800cc9b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 18:02:24.559807   70293 system_pods.go:89] "kube-apiserver-multinode-971000" [b4400411-c3b7-408c-b79f-a2e005efbef3] Running
	I1025 18:02:24.559811   70293 system_pods.go:89] "kube-controller-manager-multinode-971000" [6347ae2f-f5d5-4533-8b15-4cb194fd7c75] Running
	I1025 18:02:24.559817   70293 system_pods.go:89] "kube-proxy-2dzxx" [449549c6-a5cd-4468-b565-55811bb44448] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 18:02:24.559821   70293 system_pods.go:89] "kube-scheduler-multinode-971000" [411ae656-7e8b-4e4e-892e-9873855be79f] Running
	I1025 18:02:24.559828   70293 system_pods.go:89] "storage-provisioner" [8a6d679a-a32e-4707-ad40-063155cf0cde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:02:24.559838   70293 retry.go:31] will retry after 339.074022ms: missing components: kube-dns, kube-proxy
	I1025 18:02:24.899119   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:24.899144   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.899156   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.899166   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.904290   70293 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1025 18:02:24.904302   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.904307   70293 round_trippers.go:580]     Audit-Id: ed49c1e9-65dc-45a2-8591-39897fc51024
	I1025 18:02:24.904312   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.904316   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.904321   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.904326   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.904333   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.905385   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vm8jw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8","resourceVersion":"419","creationTimestamp":"2023-10-26T01:02:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55197 chars]
	I1025 18:02:24.906624   70293 system_pods.go:86] 8 kube-system pods found
	I1025 18:02:24.906635   70293 system_pods.go:89] "coredns-5dd5756b68-vm8jw" [8747ca8b-8044-46a8-a5bd-700e0fb6ceb8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:02:24.906641   70293 system_pods.go:89] "etcd-multinode-971000" [686f24fe-a02b-4a6b-8790-b0d2628424c1] Running
	I1025 18:02:24.906646   70293 system_pods.go:89] "kindnet-5txks" [5b661079-5482-4abd-8420-09db800cc9b5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1025 18:02:24.906651   70293 system_pods.go:89] "kube-apiserver-multinode-971000" [b4400411-c3b7-408c-b79f-a2e005efbef3] Running
	I1025 18:02:24.906656   70293 system_pods.go:89] "kube-controller-manager-multinode-971000" [6347ae2f-f5d5-4533-8b15-4cb194fd7c75] Running
	I1025 18:02:24.906673   70293 system_pods.go:89] "kube-proxy-2dzxx" [449549c6-a5cd-4468-b565-55811bb44448] Running
	I1025 18:02:24.906684   70293 system_pods.go:89] "kube-scheduler-multinode-971000" [411ae656-7e8b-4e4e-892e-9873855be79f] Running
	I1025 18:02:24.906689   70293 system_pods.go:89] "storage-provisioner" [8a6d679a-a32e-4707-ad40-063155cf0cde] Running
	I1025 18:02:24.906700   70293 system_pods.go:126] duration metric: took 841.409472ms to wait for k8s-apps to be running ...
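The block above is minikube polling the kube-system pod list and retrying (with short backoffs) until the kube-dns and kube-proxy components report Running. For readers reproducing that check outside the test harness, here is a minimal client-go sketch of the same polling pattern; it is illustrative only — the allRunning helper and the 2-minute timeout are assumptions, not minikube's system_pods.go code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the default kubeconfig (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    		if err == nil && allRunning(pods.Items, "kube-dns", "kube-proxy") {
    			fmt.Println("k8s-apps are running")
    			return
    		}
    		if time.Now().After(deadline) {
    			panic("timed out waiting for kube-system pods")
    		}
    		time.Sleep(500 * time.Millisecond) // the real code uses a jittered backoff, as seen in retry.go above
    	}
    }

    // allRunning reports whether at least one Running pod exists for each k8s-app label value.
    func allRunning(pods []corev1.Pod, apps ...string) bool {
    	running := map[string]bool{}
    	for _, p := range pods {
    		if p.Status.Phase == corev1.PodRunning {
    			running[p.Labels["k8s-app"]] = true
    		}
    	}
    	for _, app := range apps {
    		if !running[app] {
    			return false
    		}
    	}
    	return true
    }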
	I1025 18:02:24.906706   70293 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 18:02:24.906757   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:02:24.918193   70293 system_svc.go:56] duration metric: took 11.481929ms WaitForService to wait for kubelet.
	I1025 18:02:24.918206   70293 kubeadm.go:581] duration metric: took 3.065432195s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 18:02:24.918225   70293 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:02:24.918266   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes
	I1025 18:02:24.918271   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:24.918277   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:24.918283   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:24.920776   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:24.920793   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:24.920799   70293 round_trippers.go:580]     Audit-Id: bc96475b-c729-4d89-b157-a27a441dcac1
	I1025 18:02:24.920806   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:24.920812   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:24.920819   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:24.920826   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:24.920831   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:24 GMT
	I1025 18:02:24.920887   70293 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"342","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4840 chars]
	I1025 18:02:24.921102   70293 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:02:24.921115   70293 node_conditions.go:123] node cpu capacity is 12
	I1025 18:02:24.921126   70293 node_conditions.go:105] duration metric: took 2.896923ms to run NodePressure ...
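The NodePressure step above reads each node's reported capacity (here 107016164Ki of ephemeral storage and 12 CPUs). A small sketch of the same lookup with client-go, reusing the clientset and imports from the previous sketch; printNodeCapacity is a made-up name, not minikube's node_conditions.go.

    // printNodeCapacity mirrors the capacity lines logged above (cpu, ephemeral-storage).
    // cs is a *kubernetes.Clientset built as in the earlier pod-wait sketch.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("node %s: cpu=%s, ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    	return nil
    }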
	I1025 18:02:24.921133   70293 start.go:228] waiting for startup goroutines ...
	I1025 18:02:24.921138   70293 start.go:233] waiting for cluster config update ...
	I1025 18:02:24.921149   70293 start.go:242] writing updated cluster config ...
	I1025 18:02:24.944703   70293 out.go:177] 
	I1025 18:02:24.981892   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:02:24.981983   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:02:25.004569   70293 out.go:177] * Starting worker node multinode-971000-m02 in cluster multinode-971000
	I1025 18:02:25.048634   70293 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:02:25.069534   70293 out.go:177] * Pulling base image ...
	I1025 18:02:25.111807   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:02:25.111845   70293 cache.go:56] Caching tarball of preloaded images
	I1025 18:02:25.111899   70293 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:02:25.112046   70293 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:02:25.112068   70293 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 18:02:25.112166   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:02:25.165159   70293 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:02:25.165184   70293 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:02:25.165202   70293 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:02:25.165247   70293 start.go:365] acquiring machines lock for multinode-971000-m02: {Name:mk4eee4b27ca9a49e69024591cda98f7d3ec6bc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:02:25.165393   70293 start.go:369] acquired machines lock for "multinode-971000-m02" in 134.771µs
	I1025 18:02:25.165417   70293 start.go:93] Provisioning new machine with config: &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 18:02:25.165492   70293 start.go:125] createHost starting for "m02" (driver="docker")
	I1025 18:02:25.188234   70293 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 18:02:25.188298   70293 start.go:159] libmachine.API.Create for "multinode-971000" (driver="docker")
	I1025 18:02:25.188312   70293 client.go:168] LocalClient.Create starting
	I1025 18:02:25.188396   70293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem
	I1025 18:02:25.188444   70293 main.go:141] libmachine: Decoding PEM data...
	I1025 18:02:25.188457   70293 main.go:141] libmachine: Parsing certificate...
	I1025 18:02:25.188506   70293 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem
	I1025 18:02:25.188541   70293 main.go:141] libmachine: Decoding PEM data...
	I1025 18:02:25.188549   70293 main.go:141] libmachine: Parsing certificate...
	I1025 18:02:25.209386   70293 cli_runner.go:164] Run: docker network inspect multinode-971000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 18:02:25.309865   70293 network_create.go:77] Found existing network {name:multinode-971000 subnet:0xc003c69bf0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:65535}
	I1025 18:02:25.309914   70293 kic.go:118] calculated static IP "192.168.58.3" for the "multinode-971000-m02" container
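The kic step above reuses the existing multinode-971000 network (gateway 192.168.58.1) and hands the second node the next free address, 192.168.58.3 (the first node got .2). A rough sketch of that arithmetic under a /24 assumption; staticIP is illustrative only, not minikube's kic.go implementation.

    package main

    import (
    	"fmt"
    	"net"
    )

    // staticIP picks gateway+index as the container IP, mirroring the pattern in the log
    // (node 1 -> .2, node 2 -> .3). Assumes an IPv4 /24 subnet with no overflow.
    func staticIP(gateway string, nodeIndex int) (string, error) {
    	ip := net.ParseIP(gateway).To4()
    	if ip == nil {
    		return "", fmt.Errorf("not an IPv4 gateway: %q", gateway)
    	}
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	out[3] += byte(nodeIndex)
    	return out.String(), nil
    }

    func main() {
    	ip, _ := staticIP("192.168.58.1", 2) // second node in the cluster
    	fmt.Println(ip)                      // 192.168.58.3
    }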
	I1025 18:02:25.310034   70293 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 18:02:25.365203   70293 cli_runner.go:164] Run: docker volume create multinode-971000-m02 --label name.minikube.sigs.k8s.io=multinode-971000-m02 --label created_by.minikube.sigs.k8s.io=true
	I1025 18:02:25.423961   70293 oci.go:103] Successfully created a docker volume multinode-971000-m02
	I1025 18:02:25.424116   70293 cli_runner.go:164] Run: docker run --rm --name multinode-971000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-971000-m02 --entrypoint /usr/bin/test -v multinode-971000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 18:02:25.960659   70293 oci.go:107] Successfully prepared a docker volume multinode-971000-m02
	I1025 18:02:25.960695   70293 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:02:25.960707   70293 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 18:02:25.960858   70293 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-971000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 18:02:28.851198   70293 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-971000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (2.890182667s)
	I1025 18:02:28.851226   70293 kic.go:200] duration metric: took 2.890428 seconds to extract preloaded images to volume
	I1025 18:02:28.851341   70293 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 18:02:28.966671   70293 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-971000-m02 --name multinode-971000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-971000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-971000-m02 --network multinode-971000 --ip 192.168.58.3 --volume multinode-971000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 18:02:29.292433   70293 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Running}}
	I1025 18:02:29.358427   70293 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Status}}
	I1025 18:02:29.424626   70293 cli_runner.go:164] Run: docker exec multinode-971000-m02 stat /var/lib/dpkg/alternatives/iptables
	I1025 18:02:29.545353   70293 oci.go:144] the created container "multinode-971000-m02" has a running status.
	I1025 18:02:29.545387   70293 kic.go:222] Creating ssh key for kic: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa...
	I1025 18:02:29.917368   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1025 18:02:29.917418   70293 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 18:02:29.989729   70293 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Status}}
	I1025 18:02:30.055055   70293 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 18:02:30.055085   70293 kic_runner.go:114] Args: [docker exec --privileged multinode-971000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 18:02:30.179108   70293 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Status}}
	I1025 18:02:30.237150   70293 machine.go:88] provisioning docker machine ...
	I1025 18:02:30.237188   70293 ubuntu.go:169] provisioning hostname "multinode-971000-m02"
	I1025 18:02:30.237303   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:30.346663   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:30.347069   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:30.347080   70293 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-971000-m02 && echo "multinode-971000-m02" | sudo tee /etc/hostname
	I1025 18:02:30.483876   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-971000-m02
	
	I1025 18:02:30.483988   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:30.540867   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:30.541238   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:30.541273   70293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-971000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-971000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-971000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:02:30.666280   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
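The hostname provisioning above runs over SSH against the container's published 22/tcp port (127.0.0.1:57119) using the generated machine key and the docker user. A hedged sketch of that step using golang.org/x/crypto/ssh rather than libmachine's native client; the command string and paths are taken from the log lines above.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:57119", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	// Same command the provisioner issues above.
    	out, err := sess.CombinedOutput(`sudo hostname multinode-971000-m02 && echo "multinode-971000-m02" | sudo tee /etc/hostname`)
    	fmt.Printf("%s err=%v\n", out, err)
    }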
	I1025 18:02:30.666337   70293 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:02:30.666349   70293 ubuntu.go:177] setting up certificates
	I1025 18:02:30.666360   70293 provision.go:83] configureAuth start
	I1025 18:02:30.666470   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000-m02
	I1025 18:02:30.725402   70293 provision.go:138] copyHostCerts
	I1025 18:02:30.725448   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:02:30.725504   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:02:30.725510   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:02:30.725652   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:02:30.725864   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:02:30.725893   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:02:30.725898   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:02:30.726014   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:02:30.726191   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:02:30.726228   70293 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:02:30.726234   70293 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:02:30.726350   70293 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:02:30.726536   70293 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.multinode-971000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-971000-m02]
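The configureAuth step above mints a server certificate signed by the minikube CA with the SANs listed in the log entry (192.168.58.3, 127.0.0.1, localhost, minikube, multinode-971000-m02). A condensed crypto/x509 sketch of that issuance, assuming a PKCS#1 RSA CA key; newServerCert and the output file names are illustrative, not provision.go.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // newServerCert issues a server cert for cn with the given SANs, signed by the CA.
    func newServerCert(caCertPEM, caKeyPEM []byte, cn string, sans []string) ([]byte, []byte, error) {
    	caBlock, _ := pem.Decode(caCertPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	if caBlock == nil || keyBlock == nil {
    		return nil, nil, fmt.Errorf("bad CA PEM input")
    	}
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		return nil, nil, err
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
    	if err != nil {
    		return nil, nil, err
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: cn},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// SANs that parse as IPs go into IPAddresses, everything else into DNSNames.
    	for _, s := range sans {
    		if ip := net.ParseIP(s); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, s)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	return certPEM, keyPEM, nil
    }

    func main() {
    	caCert, _ := os.ReadFile("ca.pem")
    	caKey, _ := os.ReadFile("ca-key.pem")
    	cert, key, err := newServerCert(caCert, caKey, "jenkins.multinode-971000-m02",
    		[]string{"192.168.58.3", "127.0.0.1", "localhost", "minikube", "multinode-971000-m02"})
    	if err != nil {
    		panic(err)
    	}
    	os.WriteFile("server.pem", cert, 0644)
    	os.WriteFile("server-key.pem", key, 0600)
    }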
	I1025 18:02:31.282455   70293 provision.go:172] copyRemoteCerts
	I1025 18:02:31.282518   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:02:31.282573   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:31.339217   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:31.432391   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1025 18:02:31.432471   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:02:31.457280   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1025 18:02:31.457390   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1025 18:02:31.482258   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1025 18:02:31.482335   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 18:02:31.506755   70293 provision.go:86] duration metric: configureAuth took 840.360687ms
	I1025 18:02:31.506777   70293 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:02:31.506945   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:02:31.507055   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:31.567137   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:31.567444   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:31.567457   70293 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:02:31.692926   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:02:31.692944   70293 ubuntu.go:71] root file system type: overlay
	I1025 18:02:31.693094   70293 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:02:31.693197   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:31.753838   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:31.754239   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:31.754300   70293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:02:31.889139   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:02:31.889244   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:31.947028   70293 main.go:141] libmachine: Using SSH client type: native
	I1025 18:02:31.947369   70293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 57119 <nil> <nil>}
	I1025 18:02:31.947386   70293 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:02:32.620831   70293 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:02:31.886155258 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1025 18:02:32.620859   70293 machine.go:91] provisioned docker machine in 2.3836145s
	I1025 18:02:32.620867   70293 client.go:171] LocalClient.Create took 7.432327075s
	I1025 18:02:32.620887   70293 start.go:167] duration metric: libmachine.API.Create for "multinode-971000" took 7.432365189s
	I1025 18:02:32.620892   70293 start.go:300] post-start starting for "multinode-971000-m02" (driver="docker")
	I1025 18:02:32.620899   70293 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:02:32.620967   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:02:32.621055   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:32.681244   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:32.775550   70293 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:02:32.780672   70293 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1025 18:02:32.780682   70293 command_runner.go:130] > NAME="Ubuntu"
	I1025 18:02:32.780687   70293 command_runner.go:130] > VERSION_ID="22.04"
	I1025 18:02:32.780692   70293 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1025 18:02:32.780696   70293 command_runner.go:130] > VERSION_CODENAME=jammy
	I1025 18:02:32.780699   70293 command_runner.go:130] > ID=ubuntu
	I1025 18:02:32.780703   70293 command_runner.go:130] > ID_LIKE=debian
	I1025 18:02:32.780710   70293 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1025 18:02:32.780715   70293 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1025 18:02:32.780722   70293 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1025 18:02:32.780730   70293 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1025 18:02:32.780735   70293 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1025 18:02:32.780790   70293 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:02:32.780817   70293 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:02:32.780827   70293 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:02:32.780832   70293 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:02:32.780839   70293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:02:32.780945   70293 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:02:32.781206   70293 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:02:32.781214   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /etc/ssl/certs/652922.pem
	I1025 18:02:32.781402   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:02:32.792039   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:02:32.817649   70293 start.go:303] post-start completed in 196.74227ms
	I1025 18:02:32.818255   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000-m02
	I1025 18:02:32.877324   70293 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/config.json ...
	I1025 18:02:32.877792   70293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:02:32.877859   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:32.938631   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:33.025895   70293 command_runner.go:130] > 7%!
	(MISSING)I1025 18:02:33.025989   70293 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:02:33.031599   70293 command_runner.go:130] > 91G
	I1025 18:02:33.031937   70293 start.go:128] duration metric: createHost completed in 7.866199714s
	I1025 18:02:33.031959   70293 start.go:83] releasing machines lock for "multinode-971000-m02", held for 7.866319112s
	I1025 18:02:33.032092   70293 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000-m02
	I1025 18:02:33.123810   70293 out.go:177] * Found network options:
	I1025 18:02:33.165532   70293 out.go:177]   - NO_PROXY=192.168.58.2
	W1025 18:02:33.186619   70293 proxy.go:119] fail to check proxy env: Error ip not in block
	W1025 18:02:33.186656   70293 proxy.go:119] fail to check proxy env: Error ip not in block
	I1025 18:02:33.186752   70293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 18:02:33.186765   70293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:02:33.186812   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:33.186838   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:02:33.252312   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:33.252778   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:02:33.445226   70293 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1025 18:02:33.447073   70293 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1025 18:02:33.447096   70293 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1025 18:02:33.447106   70293 command_runner.go:130] > Device: 10002bh/1048619d	Inode: 1048758     Links: 1
	I1025 18:02:33.447114   70293 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 18:02:33.447133   70293 command_runner.go:130] > Access: 2023-10-26 00:39:30.354217175 +0000
	I1025 18:02:33.447142   70293 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1025 18:02:33.447147   70293 command_runner.go:130] > Change: 2023-10-26 00:39:14.867105012 +0000
	I1025 18:02:33.447152   70293 command_runner.go:130] >  Birth: 2023-10-26 00:39:14.867105012 +0000
	I1025 18:02:33.447228   70293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 18:02:33.475270   70293 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 18:02:33.475367   70293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 18:02:33.504673   70293 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1025 18:02:33.504708   70293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
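
The bridge and podman CNI configs are parked (renamed to *.mk_disabled) so that only the kindnet config minikube applies later is active. A minimal shell sketch of the same disable step, reusing the paths from the log (not part of the test output):

    # show which bridge/podman configs would be disabled
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled'
    # rename them out of the way, as the logged find/-exec does
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
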
	I1025 18:02:33.504718   70293 start.go:472] detecting cgroup driver to use...
	I1025 18:02:33.504736   70293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:02:33.504814   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:02:33.524012   70293 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1025 18:02:33.524128   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 18:02:33.536192   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:02:33.548193   70293 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:02:33.548269   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:02:33.560171   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:02:33.572909   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:02:33.585237   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:02:33.597134   70293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:02:33.608204   70293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:02:33.619829   70293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:02:33.629721   70293 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1025 18:02:33.630500   70293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:02:33.641283   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:02:33.714029   70293 ssh_runner.go:195] Run: sudo systemctl restart containerd
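
Here containerd is switched to the cgroupfs driver (to match the "cgroupfs" detection above) and restarted. A condensed sketch of the same edits, assuming the default /etc/containerd/config.toml layout seen in the log:

    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
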
	I1025 18:02:33.803883   70293 start.go:472] detecting cgroup driver to use...
	I1025 18:02:33.803907   70293 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:02:33.803984   70293 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:02:33.818151   70293 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1025 18:02:33.818202   70293 command_runner.go:130] > [Unit]
	I1025 18:02:33.818216   70293 command_runner.go:130] > Description=Docker Application Container Engine
	I1025 18:02:33.818225   70293 command_runner.go:130] > Documentation=https://docs.docker.com
	I1025 18:02:33.818234   70293 command_runner.go:130] > BindsTo=containerd.service
	I1025 18:02:33.818246   70293 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1025 18:02:33.818256   70293 command_runner.go:130] > Wants=network-online.target
	I1025 18:02:33.818264   70293 command_runner.go:130] > Requires=docker.socket
	I1025 18:02:33.818275   70293 command_runner.go:130] > StartLimitBurst=3
	I1025 18:02:33.818288   70293 command_runner.go:130] > StartLimitIntervalSec=60
	I1025 18:02:33.818308   70293 command_runner.go:130] > [Service]
	I1025 18:02:33.818319   70293 command_runner.go:130] > Type=notify
	I1025 18:02:33.818329   70293 command_runner.go:130] > Restart=on-failure
	I1025 18:02:33.818336   70293 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1025 18:02:33.818351   70293 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1025 18:02:33.818364   70293 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1025 18:02:33.818374   70293 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1025 18:02:33.818383   70293 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1025 18:02:33.818394   70293 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1025 18:02:33.818404   70293 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1025 18:02:33.818415   70293 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1025 18:02:33.818442   70293 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1025 18:02:33.818458   70293 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1025 18:02:33.818466   70293 command_runner.go:130] > ExecStart=
	I1025 18:02:33.818488   70293 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1025 18:02:33.818503   70293 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1025 18:02:33.818515   70293 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1025 18:02:33.818525   70293 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1025 18:02:33.818541   70293 command_runner.go:130] > LimitNOFILE=infinity
	I1025 18:02:33.818554   70293 command_runner.go:130] > LimitNPROC=infinity
	I1025 18:02:33.818565   70293 command_runner.go:130] > LimitCORE=infinity
	I1025 18:02:33.818578   70293 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1025 18:02:33.818586   70293 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1025 18:02:33.818594   70293 command_runner.go:130] > TasksMax=infinity
	I1025 18:02:33.818605   70293 command_runner.go:130] > TimeoutStartSec=0
	I1025 18:02:33.818621   70293 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1025 18:02:33.818633   70293 command_runner.go:130] > Delegate=yes
	I1025 18:02:33.818647   70293 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1025 18:02:33.818652   70293 command_runner.go:130] > KillMode=process
	I1025 18:02:33.818658   70293 command_runner.go:130] > [Install]
	I1025 18:02:33.818665   70293 command_runner.go:130] > WantedBy=multi-user.target
	I1025 18:02:33.819864   70293 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:02:33.819981   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:02:33.836746   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:02:33.859185   70293 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1025 18:02:33.860666   70293 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:02:33.873678   70293 command_runner.go:130] > /usr/bin/cri-dockerd
	I1025 18:02:33.873806   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:02:33.887293   70293 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:02:33.909996   70293 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:02:34.013385   70293 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:02:34.109158   70293 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:02:34.109193   70293 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:02:34.130833   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:02:34.215763   70293 ssh_runner.go:195] Run: sudo systemctl restart docker
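
After /etc/docker/daemon.json is written and the daemon restarted, Docker should report the cgroupfs driver (the log performs the same check further down with `docker info`). A quick verification sketch:

    docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs
    sudo systemctl is-active docker            # expect: active
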
	I1025 18:02:34.495652   70293 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:02:34.563759   70293 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I1025 18:02:34.563835   70293 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:02:34.634942   70293 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:02:34.703302   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:02:34.770025   70293 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:02:34.802619   70293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:02:34.870199   70293 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 18:02:34.968969   70293 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:02:34.969131   70293 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:02:34.975888   70293 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1025 18:02:34.975903   70293 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1025 18:02:34.975909   70293 command_runner.go:130] > Device: 100033h/1048627d	Inode: 267         Links: 1
	I1025 18:02:34.975916   70293 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1025 18:02:34.975924   70293 command_runner.go:130] > Access: 2023-10-26 01:02:34.881155431 +0000
	I1025 18:02:34.975930   70293 command_runner.go:130] > Modify: 2023-10-26 01:02:34.881155431 +0000
	I1025 18:02:34.975935   70293 command_runner.go:130] > Change: 2023-10-26 01:02:34.897155432 +0000
	I1025 18:02:34.975939   70293 command_runner.go:130] >  Birth: 2023-10-26 01:02:34.881155431 +0000
	I1025 18:02:34.975951   70293 start.go:540] Will wait 60s for crictl version
	I1025 18:02:34.976009   70293 ssh_runner.go:195] Run: which crictl
	I1025 18:02:34.981666   70293 command_runner.go:130] > /usr/bin/crictl
	I1025 18:02:34.981876   70293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:02:35.033300   70293 command_runner.go:130] > Version:  0.1.0
	I1025 18:02:35.033313   70293 command_runner.go:130] > RuntimeName:  docker
	I1025 18:02:35.033317   70293 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1025 18:02:35.033321   70293 command_runner.go:130] > RuntimeApiVersion:  v1
	I1025 18:02:35.035485   70293 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 18:02:35.035571   70293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:02:35.064851   70293 command_runner.go:130] > 24.0.6
	I1025 18:02:35.066266   70293 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:02:35.094067   70293 command_runner.go:130] > 24.0.6
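
At this point the runtime is reachable both through the CRI socket and the Docker API. A sketch of the same two checks, using the endpoint written to /etc/crictl.yaml above:

    stat /var/run/cri-dockerd.sock
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    docker version --format '{{.Server.Version}}'
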
	I1025 18:02:35.115994   70293 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 18:02:35.159918   70293 out.go:177]   - env NO_PROXY=192.168.58.2
	I1025 18:02:35.180769   70293 cli_runner.go:164] Run: docker exec -t multinode-971000-m02 dig +short host.docker.internal
	I1025 18:02:35.315203   70293 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:02:35.315312   70293 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:02:35.321041   70293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
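
The host entry is rewritten idempotently: any existing line for the name is stripped, the fresh mapping is appended, and the result is copied back over /etc/hosts. A sketch of the same pattern with the values from the log (NAME and IP are illustrative variables, not minikube flags):

    NAME=host.minikube.internal
    IP=192.168.65.254
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
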
	I1025 18:02:35.334465   70293 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000 for IP: 192.168.58.3
	I1025 18:02:35.334483   70293 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:02:35.334680   70293 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:02:35.334743   70293 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:02:35.334753   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1025 18:02:35.334775   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1025 18:02:35.334791   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1025 18:02:35.334812   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1025 18:02:35.334911   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:02:35.334977   70293 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:02:35.335005   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:02:35.335094   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:02:35.335184   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:02:35.335269   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:02:35.335383   70293 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:02:35.335450   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem -> /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.335483   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.335528   70293 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.335869   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:02:35.362135   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:02:35.388274   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:02:35.414748   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:02:35.440939   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:02:35.466113   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:02:35.492088   70293 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:02:35.518217   70293 ssh_runner.go:195] Run: openssl version
	I1025 18:02:35.524493   70293 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1025 18:02:35.524763   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:02:35.537094   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.542243   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.542269   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.542325   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:02:35.550279   70293 command_runner.go:130] > 51391683
	I1025 18:02:35.550373   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:02:35.562541   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:02:35.574057   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.579313   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.579331   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.579396   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:02:35.587804   70293 command_runner.go:130] > 3ec20f2e
	I1025 18:02:35.587891   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:02:35.599703   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:02:35.611006   70293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.615982   70293 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.616014   70293 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.616076   70293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:02:35.624105   70293 command_runner.go:130] > b5213941
	I1025 18:02:35.624361   70293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
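
The hashed symlinks created here (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names pointing at the copied CA certificates, which is what lets OpenSSL-based clients on the node trust them. A sketch of how one such link is derived:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0 above
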
	I1025 18:02:35.636806   70293 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:02:35.642063   70293 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:02:35.642087   70293 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:02:35.642206   70293 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:02:35.707300   70293 command_runner.go:130] > cgroupfs
	I1025 18:02:35.708700   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:02:35.708711   70293 cni.go:136] 2 nodes found, recommending kindnet
	I1025 18:02:35.708722   70293 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 18:02:35.708737   70293 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-971000 NodeName:multinode-971000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:02:35.708844   70293 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-971000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:02:35.708888   70293 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-971000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:02:35.708955   70293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 18:02:35.719131   70293 command_runner.go:130] > kubeadm
	I1025 18:02:35.719167   70293 command_runner.go:130] > kubectl
	I1025 18:02:35.719177   70293 command_runner.go:130] > kubelet
	I1025 18:02:35.720024   70293 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:02:35.720096   70293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1025 18:02:35.731061   70293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I1025 18:02:35.751271   70293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
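
The kubelet drop-in and unit file just copied over carry the ExecStart flags shown earlier (node IP, cri-dockerd socket, hostname override). A sketch for inspecting them on the node and picking up the change:

    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    cat /lib/systemd/system/kubelet.service
    sudo systemctl daemon-reload
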
	I1025 18:02:35.772282   70293 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:02:35.777942   70293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:02:35.791282   70293 host.go:66] Checking if "multinode-971000" exists ...
	I1025 18:02:35.791464   70293 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:02:35.791488   70293 start.go:304] JoinCluster: &{Name:multinode-971000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-971000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:02:35.791560   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1025 18:02:35.791622   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:35.850331   70293 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:02:36.003294   70293 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token hw8z7i.u3ykeij10qe0tbqv --discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d 
	I1025 18:02:36.003330   70293 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 18:02:36.003349   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hw8z7i.u3ykeij10qe0tbqv --discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-971000-m02"
	I1025 18:02:36.043621   70293 command_runner.go:130] > [preflight] Running pre-flight checks
	I1025 18:02:36.192809   70293 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1025 18:02:36.192838   70293 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1025 18:02:36.226540   70293 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:02:36.226563   70293 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:02:36.226569   70293 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1025 18:02:36.308470   70293 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1025 18:02:37.823920   70293 command_runner.go:130] > This node has joined the cluster:
	I1025 18:02:37.823934   70293 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1025 18:02:37.823940   70293 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1025 18:02:37.823945   70293 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1025 18:02:37.826512   70293 command_runner.go:130] ! W1026 01:02:36.042439    1503 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1025 18:02:37.826524   70293 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 18:02:37.826542   70293 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:02:37.826552   70293 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hw8z7i.u3ykeij10qe0tbqv --discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-971000-m02": (1.82314044s)
	I1025 18:02:37.826569   70293 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1025 18:02:37.960538   70293 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1025 18:02:37.960559   70293 start.go:306] JoinCluster complete in 2.1690061s
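
The join is the standard two-step kubeadm flow: mint a join command on the control plane, then run it on the worker with the cri-dockerd socket and node name minikube adds. A sketch with the token and hash elided (a real run generates fresh values):

    # on the control-plane node
    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
      kubeadm token create --print-join-command --ttl=0
    # on the worker node, using the printed token/hash
    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
      kubeadm join control-plane.minikube.internal:8443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock \
      --node-name=multinode-971000-m02
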
	I1025 18:02:37.960570   70293 cni.go:84] Creating CNI manager for ""
	I1025 18:02:37.960578   70293 cni.go:136] 2 nodes found, recommending kindnet
	I1025 18:02:37.960664   70293 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 18:02:37.966120   70293 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1025 18:02:37.966137   70293 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1025 18:02:37.966145   70293 command_runner.go:130] > Device: a4h/164d	Inode: 1049408     Links: 1
	I1025 18:02:37.966157   70293 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1025 18:02:37.966179   70293 command_runner.go:130] > Access: 2023-10-26 00:39:30.623217190 +0000
	I1025 18:02:37.966188   70293 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1025 18:02:37.966196   70293 command_runner.go:130] > Change: 2023-10-26 00:39:15.549105052 +0000
	I1025 18:02:37.966208   70293 command_runner.go:130] >  Birth: 2023-10-26 00:39:15.509105049 +0000
	I1025 18:02:37.966250   70293 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1025 18:02:37.966256   70293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1025 18:02:37.986190   70293 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 18:02:38.228114   70293 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1025 18:02:38.233031   70293 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1025 18:02:38.235932   70293 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1025 18:02:38.247397   70293 command_runner.go:130] > daemonset.apps/kindnet configured
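
With the second node joined, the kindnet manifest is reapplied and its daemonset picks up the new node. A sketch for confirming the rollout, assuming the daemonset lives in kube-system as in minikube's bundled manifest:

    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      apply -f /var/tmp/minikube/cni.yaml
    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status ds/kindnet
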
	I1025 18:02:38.252480   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:38.252743   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:38.253193   70293 round_trippers.go:463] GET https://127.0.0.1:57083/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1025 18:02:38.253206   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.253216   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.253222   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.256153   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:38.256170   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.256180   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.256187   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.256198   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.256206   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.256213   70293 round_trippers.go:580]     Content-Length: 291
	I1025 18:02:38.256221   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.256232   70293 round_trippers.go:580]     Audit-Id: 5db82f93-1d7c-450b-8ca8-257f41c6259e
	I1025 18:02:38.256261   70293 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"929058e7-d591-423d-8b82-e048f4d0d834","resourceVersion":"485","creationTimestamp":"2023-10-26T01:02:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1025 18:02:38.256358   70293 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-971000" context rescaled to 1 replicas
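
The test drives the autoscaling/v1 Scale subresource directly; a kubectl equivalent of the rescale it reports, shown as a sketch:

    kubectl -n kube-system scale deployment coredns --replicas=1
    kubectl -n kube-system get deployment coredns
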
	I1025 18:02:38.256381   70293 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1025 18:02:38.317325   70293 out.go:177] * Verifying Kubernetes components...
	I1025 18:02:38.338555   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:02:38.351792   70293 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:02:38.415325   70293 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:02:38.415574   70293 kapi.go:59] client config for multinode-971000: &rest.Config{Host:"https://127.0.0.1:57083", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/multinode-971000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:02:38.415849   70293 node_ready.go:35] waiting up to 6m0s for node "multinode-971000-m02" to be "Ready" ...
	I1025 18:02:38.415907   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000-m02
	I1025 18:02:38.415913   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.415920   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.415932   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.420281   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:38.420299   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.420305   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.420310   70293 round_trippers.go:580]     Audit-Id: a3f928ea-b736-4382-989b-2d9c23cf87ab
	I1025 18:02:38.420315   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.420320   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.420325   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.420329   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.420423   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000-m02","uid":"7897eeaa-223d-4777-9f20-9231836b81c9","resourceVersion":"486","creationTimestamp":"2023-10-26T01:02:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4016 chars]
	I1025 18:02:38.420639   70293 node_ready.go:49] node "multinode-971000-m02" has status "Ready":"True"
	I1025 18:02:38.420648   70293 node_ready.go:38] duration metric: took 4.788532ms waiting for node "multinode-971000-m02" to be "Ready" ...
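
The readiness poll above reads the node object through the forwarded API server port. A kubectl equivalent of the same check (sketch):

    kubectl get node multinode-971000-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # expect: True
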
	I1025 18:02:38.420659   70293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:02:38.420720   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods
	I1025 18:02:38.420727   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.420733   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.420739   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.425231   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:38.425255   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.425284   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.425292   70293 round_trippers.go:580]     Audit-Id: 5935a45b-df76-4d1f-a2f7-1878083de854
	I1025 18:02:38.425298   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.425303   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.425325   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.425331   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.426423   70293 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"coredns-5dd5756b68-vm8jw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8","resourceVersion":"481","creationTimestamp":"2023-10-26T01:02:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68697 chars]
	I1025 18:02:38.428369   70293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vm8jw" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.428421   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-vm8jw
	I1025 18:02:38.428426   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.428434   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.428441   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.431720   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.431733   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.431739   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.431744   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.431749   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.431754   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.431761   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.431766   70293 round_trippers.go:580]     Audit-Id: 4a971ffe-20b5-4360-87ae-3c3dcaa3d8bc
	I1025 18:02:38.431837   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-vm8jw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8","resourceVersion":"481","creationTimestamp":"2023-10-26T01:02:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0dc1c1d5-d0f7-41f7-962e-a321b5fe4f6e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6153 chars]
	I1025 18:02:38.432133   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.432148   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.432162   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.432178   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.435356   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.435373   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.435380   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.435386   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.435391   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.435400   70293 round_trippers.go:580]     Audit-Id: c8291aa1-6017-403e-9904-4c8632fd5108
	I1025 18:02:38.435406   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.435412   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.435629   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.435867   70293 pod_ready.go:92] pod "coredns-5dd5756b68-vm8jw" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.435877   70293 pod_ready.go:81] duration metric: took 7.494994ms waiting for pod "coredns-5dd5756b68-vm8jw" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.435889   70293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.435951   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/etcd-multinode-971000
	I1025 18:02:38.435964   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.435973   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.435982   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.439288   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.439302   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.439309   70293 round_trippers.go:580]     Audit-Id: 159df4b7-72f6-46b2-8a66-82f933870368
	I1025 18:02:38.439318   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.439326   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.439331   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.439336   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.439342   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.439431   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-971000","namespace":"kube-system","uid":"686f24fe-a02b-4a6b-8790-b0d2628424c1","resourceVersion":"353","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.mirror":"ac68735fb44e9f4f7a911f67dde542b7","kubernetes.io/config.seen":"2023-10-26T01:02:09.640585120Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I1025 18:02:38.439722   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.439730   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.439739   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.439747   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.443024   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.443036   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.443043   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.443049   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.443055   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.443060   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.443066   70293 round_trippers.go:580]     Audit-Id: 51f86fbc-549e-460f-bee3-a69df57041ec
	I1025 18:02:38.443072   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.443164   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.443437   70293 pod_ready.go:92] pod "etcd-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.443447   70293 pod_ready.go:81] duration metric: took 7.550706ms waiting for pod "etcd-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.443457   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.443500   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-971000
	I1025 18:02:38.443505   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.443511   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.443517   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.447120   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.447134   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.447142   70293 round_trippers.go:580]     Audit-Id: 3a646bc3-2ecb-4b9f-8b5b-dc28fc310542
	I1025 18:02:38.447151   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.447163   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.447176   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.447185   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.447193   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.447397   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-971000","namespace":"kube-system","uid":"b4400411-c3b7-408c-b79f-a2e005efbef3","resourceVersion":"378","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"3673709ea844b9ea542719bd93b9f9af","kubernetes.io/config.mirror":"3673709ea844b9ea542719bd93b9f9af","kubernetes.io/config.seen":"2023-10-26T01:02:09.640588239Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8238 chars]
	I1025 18:02:38.447710   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.447718   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.447726   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.447731   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.450636   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:38.450649   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.450655   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.450660   70293 round_trippers.go:580]     Audit-Id: 9098fdf8-88ca-46e3-8cb5-f48560dcf82d
	I1025 18:02:38.450665   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.450673   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.450679   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.450683   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.450748   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.450937   70293 pod_ready.go:92] pod "kube-apiserver-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.450946   70293 pod_ready.go:81] duration metric: took 7.481952ms waiting for pod "kube-apiserver-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.450954   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.450994   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-971000
	I1025 18:02:38.450999   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.451006   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.451011   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.453918   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:38.453929   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.453934   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.453939   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.453944   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.453949   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.453953   70293 round_trippers.go:580]     Audit-Id: 3b8ab7b4-be87-4eba-9f63-4d4149fdc7a1
	I1025 18:02:38.453959   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.454177   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-971000","namespace":"kube-system","uid":"6347ae2f-f5d5-4533-8b15-4cb194fd7c75","resourceVersion":"392","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.mirror":"5acee29fb5b4c1cdef0b50107458d961","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7813 chars]
	I1025 18:02:38.454495   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.454507   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.454513   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.454519   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.457083   70293 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1025 18:02:38.457095   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.457101   70293 round_trippers.go:580]     Audit-Id: 40cb8917-25dd-4174-b5a0-29b49fe2afdf
	I1025 18:02:38.457107   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.457112   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.457118   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.457122   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.457127   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.457184   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.457431   70293 pod_ready.go:92] pod "kube-controller-manager-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.457441   70293 pod_ready.go:81] duration metric: took 6.480707ms waiting for pod "kube-controller-manager-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.457454   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2dzxx" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.617314   70293 request.go:629] Waited for 159.763178ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-proxy-2dzxx
	I1025 18:02:38.617473   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-proxy-2dzxx
	I1025 18:02:38.617484   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.617502   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.617513   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.622186   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:38.622198   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.622204   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.622208   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.622213   70293 round_trippers.go:580]     Audit-Id: 45be6e02-588b-46e2-9c20-beb92129cb1e
	I1025 18:02:38.622219   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.622225   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.622229   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.622293   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2dzxx","generateName":"kube-proxy-","namespace":"kube-system","uid":"449549c6-a5cd-4468-b565-55811bb44448","resourceVersion":"421","creationTimestamp":"2023-10-26T01:02:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9df5e1d-1006-43e9-a993-70229a126a7e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9df5e1d-1006-43e9-a993-70229a126a7e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5528 chars]
	I1025 18:02:38.815996   70293 request.go:629] Waited for 193.435151ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.816032   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:38.816038   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:38.816047   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:38.816086   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:38.819185   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:38.819199   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:38.819207   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:38.819214   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:38.819219   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:38.819224   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:38.819228   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:38 GMT
	I1025 18:02:38.819232   70293 round_trippers.go:580]     Audit-Id: f363f67c-bf71-43bd-b016-50d60173450c
	I1025 18:02:38.819301   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:38.819509   70293 pod_ready.go:92] pod "kube-proxy-2dzxx" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:38.819519   70293 pod_ready.go:81] duration metric: took 362.049067ms waiting for pod "kube-proxy-2dzxx" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:38.819525   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qbx49" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:39.016692   70293 request.go:629] Waited for 197.099423ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-proxy-qbx49
	I1025 18:02:39.016791   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-proxy-qbx49
	I1025 18:02:39.016802   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.016813   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.016824   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.020580   70293 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1025 18:02:39.020591   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.020596   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.020601   70293 round_trippers.go:580]     Audit-Id: 42c02b9a-9700-4c6a-9a4b-dc4ab5a93d5b
	I1025 18:02:39.020606   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.020611   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.020619   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.020624   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.020682   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qbx49","generateName":"kube-proxy-","namespace":"kube-system","uid":"0870cc92-6113-421d-9cd5-08a2ca23e892","resourceVersion":"494","creationTimestamp":"2023-10-26T01:02:36Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9df5e1d-1006-43e9-a993-70229a126a7e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9df5e1d-1006-43e9-a993-70229a126a7e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I1025 18:02:39.217468   70293 request.go:629] Waited for 196.458334ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes/multinode-971000-m02
	I1025 18:02:39.217516   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000-m02
	I1025 18:02:39.217525   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.217538   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.217550   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.222023   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:39.222041   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.222047   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.222052   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.222058   70293 round_trippers.go:580]     Audit-Id: f8e20068-77d6-4fe7-87f9-197461b739e5
	I1025 18:02:39.222062   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.222067   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.222074   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.222129   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000-m02","uid":"7897eeaa-223d-4777-9f20-9231836b81c9","resourceVersion":"486","creationTimestamp":"2023-10-26T01:02:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:36Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4016 chars]
	I1025 18:02:39.222306   70293 pod_ready.go:92] pod "kube-proxy-qbx49" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:39.222314   70293 pod_ready.go:81] duration metric: took 402.770963ms waiting for pod "kube-proxy-qbx49" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:39.222319   70293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:39.418003   70293 request.go:629] Waited for 195.635696ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-971000
	I1025 18:02:39.418116   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-971000
	I1025 18:02:39.418127   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.418138   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.418161   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.422724   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:39.422735   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.422744   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.422748   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.422754   70293 round_trippers.go:580]     Audit-Id: 14bf5615-6cb4-462b-af3e-42204698d4f7
	I1025 18:02:39.422758   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.422763   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.422768   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.422855   70293 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-971000","namespace":"kube-system","uid":"411ae656-7e8b-4e4e-892e-9873855be79f","resourceVersion":"304","creationTimestamp":"2023-10-26T01:02:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"666bb44a088f2de4036212af9c22245b","kubernetes.io/config.mirror":"666bb44a088f2de4036212af9c22245b","kubernetes.io/config.seen":"2023-10-26T01:02:09.640589778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-26T01:02:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I1025 18:02:39.617159   70293 request.go:629] Waited for 194.033629ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:39.617206   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes/multinode-971000
	I1025 18:02:39.617214   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.617225   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.617245   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.621485   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:39.621496   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.621502   70293 round_trippers.go:580]     Audit-Id: 257e5928-735b-42a8-9b44-9c8eab2e5e7e
	I1025 18:02:39.621507   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.621512   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.621517   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.621521   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.621527   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.621578   70293 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-26T01:02:06Z","fieldsType":"FieldsV1","fi [truncated 4787 chars]
	I1025 18:02:39.621770   70293 pod_ready.go:92] pod "kube-scheduler-multinode-971000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:02:39.621778   70293 pod_ready.go:81] duration metric: took 399.442528ms waiting for pod "kube-scheduler-multinode-971000" in "kube-system" namespace to be "Ready" ...
	I1025 18:02:39.621786   70293 pod_ready.go:38] duration metric: took 1.201078165s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:02:39.621798   70293 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 18:02:39.621853   70293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:02:39.633961   70293 system_svc.go:56] duration metric: took 12.158408ms WaitForService to wait for kubelet.
	I1025 18:02:39.633976   70293 kubeadm.go:581] duration metric: took 1.377520239s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 18:02:39.633991   70293 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:02:39.816808   70293 request.go:629] Waited for 182.752376ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:57083/api/v1/nodes
	I1025 18:02:39.816992   70293 round_trippers.go:463] GET https://127.0.0.1:57083/api/v1/nodes
	I1025 18:02:39.817005   70293 round_trippers.go:469] Request Headers:
	I1025 18:02:39.817046   70293 round_trippers.go:473]     Accept: application/json, */*
	I1025 18:02:39.817074   70293 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1025 18:02:39.821319   70293 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1025 18:02:39.821330   70293 round_trippers.go:577] Response Headers:
	I1025 18:02:39.821336   70293 round_trippers.go:580]     Audit-Id: 810c9e8f-2e12-4eb2-9102-a5a5617acf1e
	I1025 18:02:39.821340   70293 round_trippers.go:580]     Cache-Control: no-cache, private
	I1025 18:02:39.821345   70293 round_trippers.go:580]     Content-Type: application/json
	I1025 18:02:39.821354   70293 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 154ff041-74d2-4bb1-b603-b04e6bf52a63
	I1025 18:02:39.821360   70293 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b42ac508-ac00-440c-991e-1440d6e59d3f
	I1025 18:02:39.821364   70293 round_trippers.go:580]     Date: Thu, 26 Oct 2023 01:02:39 GMT
	I1025 18:02:39.821451   70293 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"495"},"items":[{"metadata":{"name":"multinode-971000","uid":"7b6a56ef-f5f0-4955-8535-45acba6b4ed2","resourceVersion":"435","creationTimestamp":"2023-10-26T01:02:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-971000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"260f728c67096e5c74725dd26fc91a3a236708fc","minikube.k8s.io/name":"multinode-971000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_25T18_02_10_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9848 chars]
	I1025 18:02:39.821748   70293 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:02:39.821756   70293 node_conditions.go:123] node cpu capacity is 12
	I1025 18:02:39.821762   70293 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:02:39.821765   70293 node_conditions.go:123] node cpu capacity is 12
	I1025 18:02:39.821768   70293 node_conditions.go:105] duration metric: took 187.768133ms to run NodePressure ...
	I1025 18:02:39.821776   70293 start.go:228] waiting for startup goroutines ...
	I1025 18:02:39.821798   70293 start.go:242] writing updated cluster config ...
	I1025 18:02:39.822113   70293 ssh_runner.go:195] Run: rm -f paused
	I1025 18:02:39.867123   70293 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 18:02:39.909318   70293 out.go:177] * Done! kubectl is now configured to use "multinode-971000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Start docker client with request timeout 0s"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Loaded network plugin cni"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Docker Info: &{ID:f3d51850-6481-4bd0-a266-f12fa811602f Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:35 SystemTime:2023-10-26T01:01:58.954420413Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:6.4.16-linuxkit OperatingSystem:Ubu
ntu 22.04.3 LTS OSVersion:22.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0006a61c0 NCPU:12 MemTotal:6227828736 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:multinode-971000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: De
faultAddressPools:[] Warnings:[]}"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 26 01:01:58 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:01:58Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 26 01:01:58 multinode-971000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Oct 26 01:02:04 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7ac42f2dcdad432eeb1e3756741a67375cb1d90e4816307794c7394b4e227576/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:04 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9a8f2c969a514c208eb4209b96996da1f8b6058ab259c3f9567cd53b38a9bc9/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:04 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eeb64dfb3ded12aafcdf6082b1851fa87b041561ab92188a28179968aacbc81e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:04 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/279847d786425922090b228d6893db6ab1baeef70f9ca157677c80a5c8f13b48/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:23 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/53b2db21418a13bed7b201ee288a10c8cedf3987ab476aa1f2b977752337a6c5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:23 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5820da8798d68fc61909468c96e71830a39cc52b26a3e09f5d3dbe3f059f9ece/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:23 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/088e2f49585df3edcd504b3af3f4f591dfaca61ed0fcdce6b37853c9d6eb7c58/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:23 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:23Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-vm8jw_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Oct 26 01:02:24 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c78173724764d9514a55d77e0b4dabc16c5a8d6bb7b5f03ddb8c09abc4613ba6/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:02:24 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:24Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5dd5756b68-vm8jw_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Oct 26 01:02:27 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:27Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20230809-80a64d96: Status: Downloaded newer image for kindest/kindnetd:v20230809-80a64d96"
	Oct 26 01:02:30 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 26 01:02:36 multinode-971000 dockerd[1064]: time="2023-10-26T01:02:36.884209606Z" level=info msg="ignoring event" container=130896ac2d7b00a2517546fb70a32433c2451bd66c4491817c492c3542273ff8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:02:36 multinode-971000 dockerd[1064]: time="2023-10-26T01:02:36.964615012Z" level=info msg="ignoring event" container=5820da8798d68fc61909468c96e71830a39cc52b26a3e09f5d3dbe3f059f9ece module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:02:37 multinode-971000 cri-dockerd[1290]: time="2023-10-26T01:02:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1456a363c70fddfeaf26cd15a109fe25dd4a1bd1c81cb1e664c199ab513049b2/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	66e5de028885f       ead0a4a53df89                                                                              About a minute ago   Running             coredns                   1                   1456a363c70fd       coredns-5dd5756b68-vm8jw
	3e63c379e41a3       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052   About a minute ago   Running             kindnet-cni               0                   088e2f49585df       kindnet-5txks
	3a56b85798430       6e38f40d628db                                                                              About a minute ago   Running             storage-provisioner       0                   c78173724764d       storage-provisioner
	130896ac2d7b0       ead0a4a53df89                                                                              About a minute ago   Exited              coredns                   0                   5820da8798d68       coredns-5dd5756b68-vm8jw
	5c53a7a668f95       bfc896cf80fba                                                                              About a minute ago   Running             kube-proxy                0                   53b2db21418a1       kube-proxy-2dzxx
	30d1ff6804721       6d1b4fd1b182d                                                                              2 minutes ago        Running             kube-scheduler            0                   279847d786425       kube-scheduler-multinode-971000
	d8ee2e0d080d2       5374347291230                                                                              2 minutes ago        Running             kube-apiserver            0                   eeb64dfb3ded1       kube-apiserver-multinode-971000
	4b2e003897a9a       73deb9a3f7025                                                                              2 minutes ago        Running             etcd                      0                   7ac42f2dcdad4       etcd-multinode-971000
	d755633ba4432       10baa1ca17068                                                                              2 minutes ago        Running             kube-controller-manager   0                   b9a8f2c969a51       kube-controller-manager-multinode-971000
	
	* 
	* ==> coredns [130896ac2d7b] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 7231284757438160731.4679241085068702325. HINFO: dial udp 192.168.65.254:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 7231284757438160731.4679241085068702325. HINFO: dial udp 192.168.65.254:53: connect: network is unreachable
	
	* 
	* ==> coredns [66e5de028885] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37276 - 37745 "HINFO IN 1263933810490036710.2232372054731377199. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008400338s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-971000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-971000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=multinode-971000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T18_02_10_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 01:02:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-971000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 01:04:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 01:02:40 +0000   Thu, 26 Oct 2023 01:02:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 01:02:40 +0000   Thu, 26 Oct 2023 01:02:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 01:02:40 +0000   Thu, 26 Oct 2023 01:02:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 01:02:40 +0000   Thu, 26 Oct 2023 01:02:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-971000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e7c45f7441348bea4dd9fd7902c5f60
	  System UUID:                5e7c45f7441348bea4dd9fd7902c5f60
	  Boot ID:                    97028b5e-c1fe-46d5-abb1-881a12fedf72
	  Kernel Version:             6.4.16-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-vm8jw                    100m (0%)     0 (0%)      70Mi (1%)        170Mi (2%)     112s
	  kube-system                 etcd-multinode-971000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-5txks                               100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-multinode-971000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-multinode-971000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-2dzxx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-multinode-971000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (7%)   100m (0%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node multinode-971000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node multinode-971000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node multinode-971000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node multinode-971000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node multinode-971000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node multinode-971000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           113s                   node-controller  Node multinode-971000 event: Registered Node multinode-971000 in Controller
	
	
	Name:               multinode-971000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-971000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 01:02:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-971000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 01:04:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 01:03:07 +0000   Thu, 26 Oct 2023 01:02:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 01:03:07 +0000   Thu, 26 Oct 2023 01:02:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 01:03:07 +0000   Thu, 26 Oct 2023 01:02:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 01:03:07 +0000   Thu, 26 Oct 2023 01:02:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-971000-m02
	Capacity:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bcaa049ff3e4818bcbee689f3319ded
	  System UUID:                5bcaa049ff3e4818bcbee689f3319ded
	  Boot ID:                    97028b5e-c1fe-46d5-abb1-881a12fedf72
	  Kernel Version:             6.4.16-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2z4jl       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      98s
	  kube-system                 kube-proxy-qbx49    0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (0%)  100m (0%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 95s                kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s (x2 over 98s)  kubelet          Node multinode-971000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x2 over 98s)  kubelet          Node multinode-971000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x2 over 98s)  kubelet          Node multinode-971000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                97s                kubelet          Node multinode-971000-m02 status is now: NodeReady
	  Normal  RegisteredNode           93s                node-controller  Node multinode-971000-m02 event: Registered Node multinode-971000-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.002920] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:07.0: PCI INT A: no GSI
	[  +0.002075] virtio-pci 0000:00:08.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:08.0: PCI INT A: no GSI
	[  +0.004650] virtio-pci 0000:00:09.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:09.0: PCI INT A: no GSI
	[  +0.005011] virtio-pci 0000:00:0a.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0a.0: PCI INT A: no GSI
	[  +0.001909] virtio-pci 0000:00:0b.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0b.0: PCI INT A: no GSI
	[  +0.005014] virtio-pci 0000:00:0c.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0c.0: PCI INT A: no GSI
	[  +0.000255] virtio-pci 0000:00:0d.0: can't derive routing for PCI INT A
	[  +0.000000] virtio-pci 0000:00:0d.0: PCI INT A: no GSI
	[  +0.003210] virtio-pci 0000:00:0e.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0e.0: PCI INT A: no GSI
	[  +0.007936] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
	[  +0.025214] lpc_ich 0000:00:1f.0: No MFD cells added
	[  +0.006812] fail to initialize ptp_kvm
	[  +0.000001] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.756658] netlink: 'rc.init': attribute type 22 has an invalid length.
	[  +0.007092] 3[378]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.199399] FAT-fs (loop0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[  +0.000376] FAT-fs (loop0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
	[  +0.016213] grpcfuse: loading out-of-tree module taints kernel.
	
	* 
	* ==> etcd [4b2e003897a9] <==
	* {"level":"info","ts":"2023-10-26T01:02:04.535395Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-26T01:02:04.536334Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-26T01:02:04.53654Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-26T01:02:04.536619Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-26T01:02:04.536567Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-26T01:02:04.536638Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-26T01:02:05.058367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-26T01:02:05.058488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-26T01:02:05.058499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-26T01:02:05.058507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-26T01:02:05.058511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-26T01:02:05.058516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-26T01:02:05.058521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-26T01:02:05.059657Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:02:05.060254Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-971000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-26T01:02:05.060331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:02:05.060496Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:02:05.060588Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:02:05.060603Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:02:05.060702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:02:05.060776Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-26T01:02:05.060787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T01:02:05.06123Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-26T01:02:05.062073Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-26T01:02:28.79151Z","caller":"traceutil/trace.go:171","msg":"trace[1546140171] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"124.845392ms","start":"2023-10-26T01:02:28.666654Z","end":"2023-10-26T01:02:28.791499Z","steps":["trace[1546140171] 'process raft request'  (duration: 124.685634ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:04:14 up 26 min,  0 users,  load average: 0.31, 0.66, 0.52
	Linux multinode-971000 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Oct 10 20:42:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [3e63c379e41a] <==
	* I1026 01:03:08.864450       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:18.869192       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:18.869228       1 main.go:227] handling current node
	I1026 01:03:18.869236       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:18.869240       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:28.882106       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:28.882144       1 main.go:227] handling current node
	I1026 01:03:28.882151       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:28.882155       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:38.895411       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:38.895470       1 main.go:227] handling current node
	I1026 01:03:38.934157       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:38.934202       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:48.945857       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:48.945894       1 main.go:227] handling current node
	I1026 01:03:48.945902       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:48.945906       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:03:58.952508       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:03:58.952748       1 main.go:227] handling current node
	I1026 01:03:58.952757       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:03:58.952761       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	I1026 01:04:08.959689       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1026 01:04:08.959737       1 main.go:227] handling current node
	I1026 01:04:08.959747       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1026 01:04:08.959752       1 main.go:250] Node multinode-971000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [d8ee2e0d080d] <==
	* I1026 01:02:06.667595       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 01:02:06.667600       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 01:02:06.667603       1 cache.go:39] Caches are synced for autoregister controller
	I1026 01:02:06.731440       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 01:02:06.731495       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 01:02:06.734054       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 01:02:06.734671       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 01:02:06.734906       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 01:02:06.736126       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 01:02:06.830954       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 01:02:07.572411       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 01:02:07.575613       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 01:02:07.575653       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 01:02:07.932220       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 01:02:07.965074       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 01:02:08.043435       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 01:02:08.048235       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1026 01:02:08.049089       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 01:02:08.054001       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:02:08.647392       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 01:02:09.537608       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 01:02:09.548531       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 01:02:09.556011       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1026 01:02:22.138953       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1026 01:02:22.339987       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [d755633ba443] <==
	* I1026 01:02:22.351811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2dzxx"
	I1026 01:02:22.352459       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5txks"
	I1026 01:02:22.444508       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-cvn82"
	I1026 01:02:22.450977       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vm8jw"
	I1026 01:02:22.547506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="403.514449ms"
	I1026 01:02:22.552676       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cvn82"
	I1026 01:02:22.636125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.565687ms"
	I1026 01:02:22.644234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.027134ms"
	I1026 01:02:22.644378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.693µs"
	I1026 01:02:22.649665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.972µs"
	I1026 01:02:22.742607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.122µs"
	I1026 01:02:24.599987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.21µs"
	I1026 01:02:24.617497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.405µs"
	I1026 01:02:24.621964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="222.101µs"
	I1026 01:02:24.623683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.096µs"
	I1026 01:02:36.782223       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-971000-m02\" does not exist"
	I1026 01:02:36.789359       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-971000-m02" podCIDRs=["10.244.1.0/24"]
	I1026 01:02:36.793648       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2z4jl"
	I1026 01:02:36.795893       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qbx49"
	I1026 01:02:37.132496       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-971000-m02"
	I1026 01:02:37.740130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.148µs"
	I1026 01:02:37.755415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.493171ms"
	I1026 01:02:37.755569       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.192µs"
	I1026 01:02:41.489426       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-971000-m02"
	I1026 01:02:41.489509       1 event.go:307] "Event occurred" object="multinode-971000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-971000-m02 event: Registered Node multinode-971000-m02 in Controller"
	
	* 
	* ==> kube-proxy [5c53a7a668f9] <==
	* I1026 01:02:23.748003       1 server_others.go:69] "Using iptables proxy"
	I1026 01:02:23.832149       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1026 01:02:23.859791       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 01:02:23.862765       1 server_others.go:152] "Using iptables Proxier"
	I1026 01:02:23.862902       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 01:02:23.862916       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 01:02:23.862945       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 01:02:23.864454       1 server.go:846] "Version info" version="v1.28.3"
	I1026 01:02:23.864508       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:02:23.866632       1 config.go:188] "Starting service config controller"
	I1026 01:02:23.866663       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 01:02:23.866675       1 config.go:97] "Starting endpoint slice config controller"
	I1026 01:02:23.866688       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 01:02:23.866811       1 config.go:315] "Starting node config controller"
	I1026 01:02:23.866819       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 01:02:23.967434       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 01:02:23.967496       1 shared_informer.go:318] Caches are synced for service config
	I1026 01:02:23.967513       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [30d1ff680472] <==
	* E1026 01:02:06.652586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 01:02:06.652586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1026 01:02:06.652963       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1026 01:02:06.653022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 01:02:06.653030       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 01:02:06.653034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 01:02:06.730202       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 01:02:06.730419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 01:02:06.735298       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 01:02:06.735430       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1026 01:02:06.735613       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 01:02:06.735755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 01:02:06.735463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 01:02:06.735948       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1026 01:02:06.735339       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 01:02:06.736002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 01:02:07.635943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 01:02:07.635973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 01:02:07.692474       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 01:02:07.692533       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 01:02:07.773403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 01:02:07.773450       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 01:02:07.775577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 01:02:07.775615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1026 01:02:09.349385       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 26 01:02:22 multinode-971000 kubelet[2474]: I1026 01:02:22.544989    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8stq\" (UniqueName: \"kubernetes.io/projected/8747ca8b-8044-46a8-a5bd-700e0fb6ceb8-kube-api-access-b8stq\") pod \"coredns-5dd5756b68-vm8jw\" (UID: \"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8\") " pod="kube-system/coredns-5dd5756b68-vm8jw"
	Oct 26 01:02:22 multinode-971000 kubelet[2474]: I1026 01:02:22.545092    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00548f2-a206-488a-9e2b-45f2e1066597-config-volume\") pod \"coredns-5dd5756b68-cvn82\" (UID: \"b00548f2-a206-488a-9e2b-45f2e1066597\") " pod="kube-system/coredns-5dd5756b68-cvn82"
	Oct 26 01:02:22 multinode-971000 kubelet[2474]: I1026 01:02:22.545367    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbtqv\" (UniqueName: \"kubernetes.io/projected/b00548f2-a206-488a-9e2b-45f2e1066597-kube-api-access-vbtqv\") pod \"coredns-5dd5756b68-cvn82\" (UID: \"b00548f2-a206-488a-9e2b-45f2e1066597\") " pod="kube-system/coredns-5dd5756b68-cvn82"
	Oct 26 01:02:22 multinode-971000 kubelet[2474]: I1026 01:02:22.545425    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8747ca8b-8044-46a8-a5bd-700e0fb6ceb8-config-volume\") pod \"coredns-5dd5756b68-vm8jw\" (UID: \"8747ca8b-8044-46a8-a5bd-700e0fb6ceb8\") " pod="kube-system/coredns-5dd5756b68-vm8jw"
	Oct 26 01:02:22 multinode-971000 kubelet[2474]: E1026 01:02:22.553589    2474 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[config-volume kube-api-access-vbtqv], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/coredns-5dd5756b68-cvn82" podUID="b00548f2-a206-488a-9e2b-45f2e1066597"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.532489    2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5820da8798d68fc61909468c96e71830a39cc52b26a3e09f5d3dbe3f059f9ece"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.537667    2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="088e2f49585df3edcd504b3af3f4f591dfaca61ed0fcdce6b37853c9d6eb7c58"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.562159    2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b2db21418a13bed7b201ee288a10c8cedf3987ab476aa1f2b977752337a6c5"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.655597    2474 topology_manager.go:215] "Topology Admit Handler" podUID="8a6d679a-a32e-4707-ad40-063155cf0cde" podNamespace="kube-system" podName="storage-provisioner"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.657047    2474 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00548f2-a206-488a-9e2b-45f2e1066597-config-volume\") pod \"b00548f2-a206-488a-9e2b-45f2e1066597\" (UID: \"b00548f2-a206-488a-9e2b-45f2e1066597\") "
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.657172    2474 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbtqv\" (UniqueName: \"kubernetes.io/projected/b00548f2-a206-488a-9e2b-45f2e1066597-kube-api-access-vbtqv\") pod \"b00548f2-a206-488a-9e2b-45f2e1066597\" (UID: \"b00548f2-a206-488a-9e2b-45f2e1066597\") "
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.658164    2474 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b00548f2-a206-488a-9e2b-45f2e1066597-config-volume" (OuterVolumeSpecName: "config-volume") pod "b00548f2-a206-488a-9e2b-45f2e1066597" (UID: "b00548f2-a206-488a-9e2b-45f2e1066597"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.662384    2474 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00548f2-a206-488a-9e2b-45f2e1066597-kube-api-access-vbtqv" (OuterVolumeSpecName: "kube-api-access-vbtqv") pod "b00548f2-a206-488a-9e2b-45f2e1066597" (UID: "b00548f2-a206-488a-9e2b-45f2e1066597"). InnerVolumeSpecName "kube-api-access-vbtqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.757379    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8a6d679a-a32e-4707-ad40-063155cf0cde-tmp\") pod \"storage-provisioner\" (UID: \"8a6d679a-a32e-4707-ad40-063155cf0cde\") " pod="kube-system/storage-provisioner"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.757432    2474 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdcn\" (UniqueName: \"kubernetes.io/projected/8a6d679a-a32e-4707-ad40-063155cf0cde-kube-api-access-8jdcn\") pod \"storage-provisioner\" (UID: \"8a6d679a-a32e-4707-ad40-063155cf0cde\") " pod="kube-system/storage-provisioner"
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.757457    2474 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b00548f2-a206-488a-9e2b-45f2e1066597-config-volume\") on node \"multinode-971000\" DevicePath \"\""
	Oct 26 01:02:23 multinode-971000 kubelet[2474]: I1026 01:02:23.757466    2474 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vbtqv\" (UniqueName: \"kubernetes.io/projected/b00548f2-a206-488a-9e2b-45f2e1066597-kube-api-access-vbtqv\") on node \"multinode-971000\" DevicePath \"\""
	Oct 26 01:02:24 multinode-971000 kubelet[2474]: I1026 01:02:24.579290    2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.579264685 podCreationTimestamp="2023-10-26 01:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:02:24.579090341 +0000 UTC m=+15.072789446" watchObservedRunningTime="2023-10-26 01:02:24.579264685 +0000 UTC m=+15.072963785"
	Oct 26 01:02:24 multinode-971000 kubelet[2474]: I1026 01:02:24.600086    2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vm8jw" podStartSLOduration=2.600036384 podCreationTimestamp="2023-10-26 01:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:02:24.599486756 +0000 UTC m=+15.093185861" watchObservedRunningTime="2023-10-26 01:02:24.600036384 +0000 UTC m=+15.093735493"
	Oct 26 01:02:24 multinode-971000 kubelet[2474]: I1026 01:02:24.609787    2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2dzxx" podStartSLOduration=2.609758703 podCreationTimestamp="2023-10-26 01:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-26 01:02:24.609591148 +0000 UTC m=+15.103290254" watchObservedRunningTime="2023-10-26 01:02:24.609758703 +0000 UTC m=+15.103457808"
	Oct 26 01:02:25 multinode-971000 kubelet[2474]: I1026 01:02:25.671706    2474 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b00548f2-a206-488a-9e2b-45f2e1066597" path="/var/lib/kubelet/pods/b00548f2-a206-488a-9e2b-45f2e1066597/volumes"
	Oct 26 01:02:30 multinode-971000 kubelet[2474]: I1026 01:02:30.070510    2474 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 01:02:30 multinode-971000 kubelet[2474]: I1026 01:02:30.071355    2474 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 01:02:37 multinode-971000 kubelet[2474]: I1026 01:02:37.731281    2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5820da8798d68fc61909468c96e71830a39cc52b26a3e09f5d3dbe3f059f9ece"
	Oct 26 01:02:37 multinode-971000 kubelet[2474]: I1026 01:02:37.740313    2474 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5txks" podStartSLOduration=11.602141 podCreationTimestamp="2023-10-26 01:02:22 +0000 UTC" firstStartedPulling="2023-10-26 01:02:23.537351308 +0000 UTC m=+14.031050404" lastFinishedPulling="2023-10-26 01:02:27.676459274 +0000 UTC m=+18.169193335" observedRunningTime="2023-10-26 01:02:28.792995481 +0000 UTC m=+19.285729539" watchObservedRunningTime="2023-10-26 01:02:37.740283931 +0000 UTC m=+28.233017993"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-971000 -n multinode-971000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-971000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.55s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (65.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.300085893.exe start -p running-upgrade-163000 --memory=2200 --vm-driver=docker 
E1025 18:16:51.119957   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 18:17:35.254265   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.300085893.exe start -p running-upgrade-163000 --memory=2200 --vm-driver=docker : exit status 70 (50.886611373s)

                                                
                                                
-- stdout --
	! [running-upgrade-163000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1968436348
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5939MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:17:21.619882393 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-163000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5939MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:17:36.535135523 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-163000", then "minikube start -p running-upgrade-163000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.31.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.31.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 4.00 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 8.00 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 15.94 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 22.78 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 31.14 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 38.97 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 46.86 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 57.58 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 64.00 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 73.41 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 85.17 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 95.23 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 103.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 111.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 118.97 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 125.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 128.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 135.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 142.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 148.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 155.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 160.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 166.97 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 173.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 179.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 185.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 192.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 199.16 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 204.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 211.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 217.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 222.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 227.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 233.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 237.29 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 242.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 249.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 251.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.t
ar.lz4: 256.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 262.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 269.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 272.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 280.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 288.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 296.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 304.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 308.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 312.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 320.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 328.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 336.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd6
4.tar.lz4: 344.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 352.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 358.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 370.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 383.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 397.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 412.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 424.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 432.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 441.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 448.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 456.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 461.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-a
md64.tar.lz4: 471.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 480.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 488.41 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 500.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 508.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 520.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 531.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:17:36.535135523 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.300085893.exe start -p running-upgrade-163000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:133: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.300085893.exe start -p running-upgrade-163000 --memory=2200 --vm-driver=docker : exit status 70 (4.02524506s)

                                                
                                                
-- stdout --
	* [running-upgrade-163000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1538827994
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-163000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.300085893.exe start -p running-upgrade-163000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:133: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.300085893.exe start -p running-upgrade-163000 --memory=2200 --vm-driver=docker : exit status 70 (3.952406763s)

                                                
                                                
-- stdout --
	* [running-upgrade-163000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig844125045
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-163000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
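All three start attempts above fail at the same point: after writing /lib/systemd/system/docker.service.new, the provisioner's `sudo systemctl restart docker` exits non-zero and systemd reports that the control process failed. The diff captured in the output is the generated unit; as its own comments explain, it first sets an empty `ExecStart=` to clear the inherited command before declaring the new one, because systemd refuses to start a non-oneshot service that ends up with more than one ExecStart= setting. The following is only a diagnostic sketch, not part of the captured test run; it assumes the running-upgrade-163000 node container created above is still running:

	# Inspect the unit systemd actually loaded inside the minikube node container
	docker exec running-upgrade-163000 systemctl cat docker.service
	# Check whether duplicate ExecStart= lines survived the rewrite
	docker exec running-upgrade-163000 grep -c '^ExecStart=' /lib/systemd/system/docker.service
	# Ask systemd/journald why the start job failed (as the error message itself suggests)
	docker exec running-upgrade-163000 systemctl status docker.service --no-pager
	docker exec running-upgrade-163000 journalctl -u docker.service --no-pager -n 50
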
version_upgrade_test.go:139: legacy v1.9.0 start failed: exit status 70
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-25 18:17:49.734067 -0700 PDT m=+2346.790547821
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-163000
helpers_test.go:235: (dbg) docker inspect running-upgrade-163000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92b48ae3983ea42bf99e48bbd99cadc8517b84f76a15694c2996ebbd95c7a6d2",
	        "Created": "2023-10-26T01:17:30.081662693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195594,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:17:30.296545138Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/92b48ae3983ea42bf99e48bbd99cadc8517b84f76a15694c2996ebbd95c7a6d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92b48ae3983ea42bf99e48bbd99cadc8517b84f76a15694c2996ebbd95c7a6d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/92b48ae3983ea42bf99e48bbd99cadc8517b84f76a15694c2996ebbd95c7a6d2/hosts",
	        "LogPath": "/var/lib/docker/containers/92b48ae3983ea42bf99e48bbd99cadc8517b84f76a15694c2996ebbd95c7a6d2/92b48ae3983ea42bf99e48bbd99cadc8517b84f76a15694c2996ebbd95c7a6d2-json.log",
	        "Name": "/running-upgrade-163000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-163000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0545f521b38c9fe845a6cc9472f6d0b2746db1abab8480f0f1830b2c355a3f38-init/diff:/var/lib/docker/overlay2/d6672e613bb02bd7dd14300293f45cb78de98f1c7128082a2421ae037c0b13ec/diff:/var/lib/docker/overlay2/e2e73ca3080d9c529ffb10f2eea67603eb3dbcc6cb2535d3aace97e3693da9eb/diff:/var/lib/docker/overlay2/6af9671d3bbbabae727d23cdccb7d7daae0c709c407987827719699890b7a6e1/diff:/var/lib/docker/overlay2/1a430d4a29ae2363c762630bd97f48ae20b6d710481ac1fa15b9f31dfa6d99dc/diff:/var/lib/docker/overlay2/d5d3741d8008f10485f4663974a0e05286905dfc543d2865b3eb3dd189c2c0cd/diff:/var/lib/docker/overlay2/ac89e51629d1b778a6631ef623aa50bed1a54a8a272129557acfb260d052eb8a/diff:/var/lib/docker/overlay2/94cd1d40cd045b909ad583db3b34774f8174f2c4ef53751a3d62f881993e5a99/diff:/var/lib/docker/overlay2/516eea8fbd9f85f0f54038149fb8cda86e5f02567a88cde900feaa6120a631c1/diff:/var/lib/docker/overlay2/214b948f1ddde9a13a6dde4c9a13be42d1509e34ee5fd01b40bf65b1011b0d04/diff:/var/lib/docker/overlay2/5a9940
759548cf8f0d426d4c517e4b130a4d13f6bb7ebf79c939d6cd431da03c/diff:/var/lib/docker/overlay2/99ef3c12061c77b4378da50b5459c471630e8cbc30261f3ee769b90f17e447ad/diff:/var/lib/docker/overlay2/3f0b8f3d987df41619addaa9e3f2c3a084dfba202fcab8ef717e78cdb343672d/diff:/var/lib/docker/overlay2/7a16469da950e1a384c3e8d34d8e5e576bca76b02dd97ff172ed4c76147da020/diff:/var/lib/docker/overlay2/60a369390ac647a09ba1e0700e212285f29c8c5d9d7d153c1ff4495e6d5d4b68/diff:/var/lib/docker/overlay2/c4b15ba87e225248094d159cf593fb0b46304b0ee354d8161d37e00fd058d880/diff:/var/lib/docker/overlay2/037edf613fce2c2111e172c7f106e5364a4fd3ef227dd6496d9ca921dec30b06/diff:/var/lib/docker/overlay2/3fa60cf93f361d3f2de355a1c9c2a039292a0979a271b8147baa807469f7640d/diff:/var/lib/docker/overlay2/24a747d83169d0b648ca52b3aa6592463599595264c6adb513fd00cc1a6b8faa/diff:/var/lib/docker/overlay2/cb0ecb3ac56d83a7bc7d261856f61807e581c04980dab3dca511afd2b91cb6ad/diff:/var/lib/docker/overlay2/e53375eb16e3e671322acb01d14c7ba5ecd0572795f0b8000bdd8e32a87a1e18/diff:/var/lib/d
ocker/overlay2/1575a1bcceee782fd6cca7631af847096b6ddd72b2a4f5ca475742e01849c96b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0545f521b38c9fe845a6cc9472f6d0b2746db1abab8480f0f1830b2c355a3f38/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0545f521b38c9fe845a6cc9472f6d0b2746db1abab8480f0f1830b2c355a3f38/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0545f521b38c9fe845a6cc9472f6d0b2746db1abab8480f0f1830b2c355a3f38/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-163000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-163000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-163000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-163000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-163000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74aa6b4a00290b34288315a1d9f44499a64717301cac62fb28805823c4e246aa",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58020"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58021"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58022"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/74aa6b4a0029",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "0b1ef1a2cc39f9d962f896d251596bebdc3c5b275131c396d8884ccae931d9a7",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "17c5f4f69c914df0ca6389ad117bc7d5b2743fb9e93a6609b3f776160dc635c5",
	                    "EndpointID": "0b1ef1a2cc39f9d962f896d251596bebdc3c5b275131c396d8884ccae931d9a7",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
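	(The inspect dump above is the container's full state at post-mortem time. When only the run state and base image matter, the same check can be narrowed with docker's standard -f Go-template flag; this is a sketch of an equivalent, shorter query, not something the test harness runs:)
	
	docker inspect -f '{{.State.Status}} {{.Name}} {{.Config.Image}}' running-upgrade-163000
	# for the state captured above this would print: running /running-upgrade-163000 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380...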
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-163000 -n running-upgrade-163000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-163000 -n running-upgrade-163000: exit status 6 (368.210902ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:17:50.144428   74560 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-163000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-163000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-163000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-163000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-163000: (2.318658534s)
--- FAIL: TestRunningBinaryUpgrade (65.85s)
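(The root cause recorded above is the in-container docker daemon refusing to start — "Job for docker.service failed because the control process exited with error code" — and the error text itself points at systemctl status and journalctl. A minimal follow-up, assuming the running-upgrade-163000 container had not yet been removed by the cleanup step, would be to read those logs through docker exec; the kic container runs systemd as PID 1, so systemctl and journalctl work inside it:)

	docker exec running-upgrade-163000 systemctl status docker.service --no-pager
	docker exec running-upgrade-163000 journalctl -xeu docker.service --no-pager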

                                                
                                    
TestKubernetesUpgrade (576.9s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-401000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-401000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m16.822677948s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-401000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-401000 in cluster kubernetes-upgrade-401000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:18:41.458305   74918 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:18:41.458487   74918 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:18:41.458492   74918 out.go:309] Setting ErrFile to fd 2...
	I1025 18:18:41.458496   74918 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:18:41.458689   74918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:18:41.460053   74918 out.go:303] Setting JSON to false
	I1025 18:18:41.482055   74918 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":33489,"bootTime":1698249632,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 18:18:41.482160   74918 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:18:41.504347   74918 out.go:177] * [kubernetes-upgrade-401000] minikube v1.31.2 on Darwin 14.0
	I1025 18:18:41.545969   74918 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 18:18:41.546074   74918 notify.go:220] Checking for updates...
	I1025 18:18:41.589959   74918 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:18:41.610994   74918 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:18:41.631874   74918 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:18:41.653023   74918 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 18:18:41.674046   74918 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:18:41.695863   74918 config.go:182] Loaded profile config "cert-expiration-531000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:18:41.696045   74918 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:18:41.753920   74918 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 18:18:41.754058   74918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:18:41.857395   74918 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:18:41.846371999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:18:41.899991   74918 out.go:177] * Using the docker driver based on user configuration
	I1025 18:18:41.921114   74918 start.go:298] selected driver: docker
	I1025 18:18:41.921138   74918 start.go:902] validating driver "docker" against <nil>
	I1025 18:18:41.921151   74918 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:18:41.925802   74918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:18:42.028069   74918 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:18:42.015934676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:18:42.028267   74918 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 18:18:42.028460   74918 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 18:18:42.049776   74918 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 18:18:42.070691   74918 cni.go:84] Creating CNI manager for ""
	I1025 18:18:42.070724   74918 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:18:42.070747   74918 start_flags.go:323] config:
	{Name:kubernetes-upgrade-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:18:42.092496   74918 out.go:177] * Starting control plane node kubernetes-upgrade-401000 in cluster kubernetes-upgrade-401000
	I1025 18:18:42.113610   74918 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:18:42.134639   74918 out.go:177] * Pulling base image ...
	I1025 18:18:42.176520   74918 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:18:42.176599   74918 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 18:18:42.176619   74918 cache.go:56] Caching tarball of preloaded images
	I1025 18:18:42.176638   74918 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:18:42.176813   74918 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:18:42.176831   74918 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 18:18:42.176946   74918 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/config.json ...
	I1025 18:18:42.176981   74918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/config.json: {Name:mke94b4fbeef4368e36204da59a13b8144a9877a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:18:42.229528   74918 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:18:42.229556   74918 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:18:42.229577   74918 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:18:42.229623   74918 start.go:365] acquiring machines lock for kubernetes-upgrade-401000: {Name:mk6409086ac74878831c315bb785a33c1dba8141 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:18:42.229766   74918 start.go:369] acquired machines lock for "kubernetes-upgrade-401000" in 131.129µs
	I1025 18:18:42.229793   74918 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-401000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:18:42.229848   74918 start.go:125] createHost starting for "" (driver="docker")
	I1025 18:18:42.253353   74918 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 18:18:42.253747   74918 start.go:159] libmachine.API.Create for "kubernetes-upgrade-401000" (driver="docker")
	I1025 18:18:42.253792   74918 client.go:168] LocalClient.Create starting
	I1025 18:18:42.254023   74918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem
	I1025 18:18:42.254113   74918 main.go:141] libmachine: Decoding PEM data...
	I1025 18:18:42.254148   74918 main.go:141] libmachine: Parsing certificate...
	I1025 18:18:42.254237   74918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem
	I1025 18:18:42.254300   74918 main.go:141] libmachine: Decoding PEM data...
	I1025 18:18:42.254316   74918 main.go:141] libmachine: Parsing certificate...
	I1025 18:18:42.255058   74918 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-401000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 18:18:42.306236   74918 cli_runner.go:211] docker network inspect kubernetes-upgrade-401000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 18:18:42.306332   74918 network_create.go:281] running [docker network inspect kubernetes-upgrade-401000] to gather additional debugging logs...
	I1025 18:18:42.306349   74918 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-401000
	W1025 18:18:42.356833   74918 cli_runner.go:211] docker network inspect kubernetes-upgrade-401000 returned with exit code 1
	I1025 18:18:42.356871   74918 network_create.go:284] error running [docker network inspect kubernetes-upgrade-401000]: docker network inspect kubernetes-upgrade-401000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-401000 not found
	I1025 18:18:42.356895   74918 network_create.go:286] output of [docker network inspect kubernetes-upgrade-401000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-401000 not found
	
	** /stderr **
	I1025 18:18:42.357020   74918 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 18:18:42.409705   74918 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1025 18:18:42.410108   74918 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00225d0c0}
	I1025 18:18:42.410127   74918 network_create.go:124] attempt to create docker network kubernetes-upgrade-401000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1025 18:18:42.410207   74918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 kubernetes-upgrade-401000
	W1025 18:18:42.461125   74918 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 kubernetes-upgrade-401000 returned with exit code 1
	W1025 18:18:42.461160   74918 network_create.go:149] failed to create docker network kubernetes-upgrade-401000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 kubernetes-upgrade-401000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1025 18:18:42.461178   74918 network_create.go:116] failed to create docker network kubernetes-upgrade-401000 192.168.58.0/24, will retry: subnet is taken
	I1025 18:18:42.462807   74918 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1025 18:18:42.463187   74918 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002159cf0}
	I1025 18:18:42.463199   74918 network_create.go:124] attempt to create docker network kubernetes-upgrade-401000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1025 18:18:42.463269   74918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 kubernetes-upgrade-401000
	W1025 18:18:42.513822   74918 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 kubernetes-upgrade-401000 returned with exit code 1
	W1025 18:18:42.513865   74918 network_create.go:149] failed to create docker network kubernetes-upgrade-401000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 kubernetes-upgrade-401000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1025 18:18:42.513880   74918 network_create.go:116] failed to create docker network kubernetes-upgrade-401000 192.168.67.0/24, will retry: subnet is taken
	I1025 18:18:42.515324   74918 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1025 18:18:42.515711   74918 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00237aa90}
	I1025 18:18:42.515727   74918 network_create.go:124] attempt to create docker network kubernetes-upgrade-401000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1025 18:18:42.515789   74918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 kubernetes-upgrade-401000
	I1025 18:18:42.602845   74918 network_create.go:108] docker network kubernetes-upgrade-401000 192.168.76.0/24 created
	I1025 18:18:42.602882   74918 kic.go:118] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-401000" container
	I1025 18:18:42.602983   74918 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 18:18:42.654577   74918 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-401000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 --label created_by.minikube.sigs.k8s.io=true
	I1025 18:18:42.705846   74918 oci.go:103] Successfully created a docker volume kubernetes-upgrade-401000
	I1025 18:18:42.705976   74918 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-401000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 --entrypoint /usr/bin/test -v kubernetes-upgrade-401000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 18:18:43.170218   74918 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-401000
	I1025 18:18:43.170260   74918 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:18:43.170273   74918 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 18:18:43.170380   74918 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-401000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 18:18:45.914576   74918 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-401000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (2.744048201s)
	I1025 18:18:45.914599   74918 kic.go:200] duration metric: took 2.744247 seconds to extract preloaded images to volume
	I1025 18:18:45.914707   74918 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 18:18:46.015949   74918 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-401000 --name kubernetes-upgrade-401000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-401000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-401000 --network kubernetes-upgrade-401000 --ip 192.168.76.2 --volume kubernetes-upgrade-401000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 18:18:46.323108   74918 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-401000 --format={{.State.Running}}
	I1025 18:18:46.390540   74918 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-401000 --format={{.State.Status}}
	I1025 18:18:46.455322   74918 cli_runner.go:164] Run: docker exec kubernetes-upgrade-401000 stat /var/lib/dpkg/alternatives/iptables
	I1025 18:18:46.624772   74918 oci.go:144] the created container "kubernetes-upgrade-401000" has a running status.
	I1025 18:18:46.624812   74918 kic.go:222] Creating ssh key for kic: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa...
	I1025 18:18:46.774100   74918 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 18:18:46.841889   74918 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-401000 --format={{.State.Status}}
	I1025 18:18:46.904719   74918 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 18:18:46.904739   74918 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-401000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 18:18:47.012824   74918 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-401000 --format={{.State.Status}}
	I1025 18:18:47.072473   74918 machine.go:88] provisioning docker machine ...
	I1025 18:18:47.072512   74918 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-401000"
	I1025 18:18:47.072661   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:47.129795   74918 main.go:141] libmachine: Using SSH client type: native
	I1025 18:18:47.130146   74918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58132 <nil> <nil>}
	I1025 18:18:47.130158   74918 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-401000 && echo "kubernetes-upgrade-401000" | sudo tee /etc/hostname
	I1025 18:18:47.264704   74918 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-401000
	
	I1025 18:18:47.264787   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:47.318085   74918 main.go:141] libmachine: Using SSH client type: native
	I1025 18:18:47.318401   74918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58132 <nil> <nil>}
	I1025 18:18:47.318415   74918 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-401000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-401000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-401000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:18:47.441851   74918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:18:47.441872   74918 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:18:47.441892   74918 ubuntu.go:177] setting up certificates
	I1025 18:18:47.441897   74918 provision.go:83] configureAuth start
	I1025 18:18:47.441981   74918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-401000
	I1025 18:18:47.493769   74918 provision.go:138] copyHostCerts
	I1025 18:18:47.493854   74918 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:18:47.493863   74918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:18:47.494635   74918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:18:47.494817   74918 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:18:47.494832   74918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:18:47.494919   74918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:18:47.495065   74918 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:18:47.495071   74918 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:18:47.495151   74918 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:18:47.495294   74918 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-401000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-401000]
	I1025 18:18:47.669201   74918 provision.go:172] copyRemoteCerts
	I1025 18:18:47.669255   74918 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:18:47.669309   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:47.720057   74918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58132 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:18:47.811059   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 18:18:47.834434   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 18:18:47.857454   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:18:47.880525   74918 provision.go:86] duration metric: configureAuth took 438.601444ms
	I1025 18:18:47.880540   74918 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:18:47.880676   74918 config.go:182] Loaded profile config "kubernetes-upgrade-401000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 18:18:47.880738   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:47.934508   74918 main.go:141] libmachine: Using SSH client type: native
	I1025 18:18:47.934795   74918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58132 <nil> <nil>}
	I1025 18:18:47.934809   74918 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:18:48.056943   74918 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:18:48.056967   74918 ubuntu.go:71] root file system type: overlay
	I1025 18:18:48.057054   74918 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:18:48.057138   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:48.109648   74918 main.go:141] libmachine: Using SSH client type: native
	I1025 18:18:48.109929   74918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58132 <nil> <nil>}
	I1025 18:18:48.109980   74918 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:18:48.241680   74918 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
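The unit echoed back above follows the standard systemd override pattern that its own comments describe: because a non-oneshot service may carry only one ExecStart, the drop-in first clears the inherited command with an empty ExecStart= and then supplies its own. As a rough illustration only (not minikube's actual provisioner code; field names and the trimmed-down unit text are assumptions), a Go sketch rendering such an override with text/template:

package main

import (
	"os"
	"text/template"
)

// A hypothetical, trimmed-down version of the override shown in the log:
// ExecStart is cleared first so systemd does not see two ExecStart commands
// for a Type=notify service.
const unitTmpl = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --default-ulimit=nofile={{.NoFile}}:{{.NoFile}}
ExecReload=/bin/kill -s HUP $MAINPID
`

func main() {
	t := template.Must(template.New("docker-override").Parse(unitTmpl))
	// A provisioner would pipe this through `sudo tee .../docker.service.new`
	// over SSH; here it is simply printed.
	if err := t.Execute(os.Stdout, struct{ Port, NoFile int }{2376, 1048576}); err != nil {
		panic(err)
	}
}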
	
	I1025 18:18:48.241767   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:48.293884   74918 main.go:141] libmachine: Using SSH client type: native
	I1025 18:18:48.294177   74918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58132 <nil> <nil>}
	I1025 18:18:48.294190   74918 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:18:48.915861   74918 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:18:48.237997198 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
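The SSH command at 18:18:48 only swaps in the new unit and restarts Docker when `diff -u` reports a difference, which is why the full unified diff appears in the output above. A loose local Go equivalent of that guard (paths assumed; minikube runs this remotely over SSH, not locally):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged mirrors the shell guard from the log:
//   diff -u old new || { mv new old; systemctl daemon-reload; systemctl restart docker; }
// The unit is only installed, and the daemon only restarted, when the staged
// copy actually differs (diff exits non-zero on a difference or a missing file).
func replaceIfChanged(oldPath, newPath string) error {
	if err := exec.Command("diff", "-u", oldPath, newPath).Run(); err == nil {
		return nil // identical content: the shell version simply leaves the .new file in place
	}
	if err := os.Rename(newPath, oldPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := replaceIfChanged("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}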
	
	I1025 18:18:48.915885   74918 machine.go:91] provisioned docker machine in 1.843338261s
	I1025 18:18:48.915893   74918 client.go:171] LocalClient.Create took 6.661904933s
	I1025 18:18:48.915910   74918 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-401000" took 6.661978506s
	I1025 18:18:48.915919   74918 start.go:300] post-start starting for "kubernetes-upgrade-401000" (driver="docker")
	I1025 18:18:48.915928   74918 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:18:48.915982   74918 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:18:48.916040   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:48.969080   74918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58132 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:18:49.059868   74918 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:18:49.064449   74918 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:18:49.064475   74918 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:18:49.064482   74918 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:18:49.064486   74918 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:18:49.064498   74918 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:18:49.064593   74918 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:18:49.064764   74918 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:18:49.064955   74918 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:18:49.074056   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:18:49.096389   74918 start.go:303] post-start completed in 180.45528ms
	I1025 18:18:49.096904   74918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-401000
	I1025 18:18:49.148772   74918 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/config.json ...
	I1025 18:18:49.149209   74918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:18:49.149282   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:49.200441   74918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58132 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:18:49.286894   74918 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:18:49.292312   74918 start.go:128] duration metric: createHost completed in 7.062248363s
	I1025 18:18:49.292341   74918 start.go:83] releasing machines lock for "kubernetes-upgrade-401000", held for 7.062365506s
	I1025 18:18:49.292463   74918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-401000
	I1025 18:18:49.344636   74918 ssh_runner.go:195] Run: cat /version.json
	I1025 18:18:49.344670   74918 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:18:49.344707   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:49.344756   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:49.401603   74918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58132 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:18:49.401598   74918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58132 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:18:49.590293   74918 ssh_runner.go:195] Run: systemctl --version
	I1025 18:18:49.595593   74918 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 18:18:49.600997   74918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 18:18:49.626295   74918 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 18:18:49.626367   74918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 18:18:49.643937   74918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 18:18:49.660991   74918 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
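The find/sed pipeline above rewrites any bridge or podman CNI config so its "subnet" matches the pod CIDR (10.244.0.0/16 here). A simplified Go sketch of the same textual substitution, assuming a single example file rather than the full walk of /etc/cni/net.d:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchSubnet is a rough equivalent of the sed command in the log: it rewrites
// every `"subnet": "..."` entry in a CNI config file to the given pod CIDR.
func patchSubnet(path, podCIDR string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	patched := re.ReplaceAll(data, []byte(fmt.Sprintf(`"subnet": "%s"`, podCIDR)))
	return os.WriteFile(path, patched, 0o644)
}

func main() {
	// Example path taken from the log's "configured [...] bridge cni config(s)" line.
	if err := patchSubnet("/etc/cni/net.d/100-crio-bridge.conf", "10.244.0.0/16"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}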
	I1025 18:18:49.661007   74918 start.go:472] detecting cgroup driver to use...
	I1025 18:18:49.661023   74918 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:18:49.661127   74918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:18:49.677742   74918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1025 18:18:49.688734   74918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:18:49.699458   74918 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:18:49.699517   74918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:18:49.710176   74918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:18:49.721158   74918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:18:49.731801   74918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:18:49.742394   74918 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:18:49.752360   74918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:18:49.763273   74918 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:18:49.772689   74918 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:18:49.781883   74918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:18:49.841615   74918 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:18:49.925651   74918 start.go:472] detecting cgroup driver to use...
	I1025 18:18:49.925670   74918 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:18:49.925733   74918 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:18:49.944858   74918 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:18:49.944932   74918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:18:49.958159   74918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:18:49.976771   74918 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:18:49.982354   74918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:18:49.992577   74918 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:18:50.025918   74918 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:18:50.125533   74918 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:18:50.217502   74918 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:18:50.217588   74918 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:18:50.235069   74918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:18:50.293827   74918 ssh_runner.go:195] Run: sudo systemctl restart docker
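The 130-byte daemon.json copied just before this restart is what pins Docker to the cgroupfs driver. The log does not show its content, so the sketch below is only a plausible minimal file using standard dockerd options, generated from Go:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Minimal daemon.json forcing the cgroupfs driver; dockerd picks this up on restart.
	// The exact keys minikube writes are not reproduced in the log above.
	cfg := map[string]interface{}{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}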
	I1025 18:18:50.624879   74918 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:18:50.652011   74918 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:18:50.720184   74918 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	I1025 18:18:50.720263   74918 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-401000 dig +short host.docker.internal
	I1025 18:18:50.845225   74918 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:18:50.845312   74918 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:18:50.850282   74918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:18:50.862552   74918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:18:50.914649   74918 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:18:50.914720   74918 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:18:50.936888   74918 docker.go:693] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1025 18:18:50.936903   74918 docker.go:699] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1025 18:18:50.936950   74918 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:18:50.946492   74918 ssh_runner.go:195] Run: which lz4
	I1025 18:18:50.951241   74918 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 18:18:50.955813   74918 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 18:18:50.955836   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I1025 18:18:56.260388   74918 docker.go:657] Took 5.309049 seconds to copy over tarball
	I1025 18:18:56.260451   74918 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 18:18:58.161623   74918 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.901103263s)
	I1025 18:18:58.161638   74918 ssh_runner.go:146] rm: /preloaded.tar.lz4
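The sequence above stats /preloaded.tar.lz4 on the guest, copies the ~370 MB preload tarball over when it is missing, extracts it into /var with `tar -I lz4`, and removes it. A hedged Go sketch of the same check-extract-remove flow, run locally with the scp step omitted:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the flow in the log: the tarball is unpacked into /var
// with lz4 and then removed once extraction succeeds.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		// Missing tarball: this is the point where minikube scp's the cached
		// preload archive onto the node before extracting.
		return fmt.Errorf("preload not present: %w", err)
	}
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}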
	I1025 18:18:58.210707   74918 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:18:58.221497   74918 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I1025 18:18:58.240262   74918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:18:58.301774   74918 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:18:58.804448   74918 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:18:58.825492   74918 docker.go:693] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1025 18:18:58.825507   74918 docker.go:699] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1025 18:18:58.825514   74918 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 18:18:58.831589   74918 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:18:58.831591   74918 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1025 18:18:58.831884   74918 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:18:58.831966   74918 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:18:58.832048   74918 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1025 18:18:58.832259   74918 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:18:58.832366   74918 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:18:58.833652   74918 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1025 18:18:58.837934   74918 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:18:58.837976   74918 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1025 18:18:58.840140   74918 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:18:58.840141   74918 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:18:58.842091   74918 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:18:58.842244   74918 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:18:58.842249   74918 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1025 18:18:58.842301   74918 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1025 18:18:59.691137   74918 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:18:59.828055   74918 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1025 18:18:59.850442   74918 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1025 18:18:59.850481   74918 docker.go:318] Removing image: registry.k8s.io/pause:3.1
	I1025 18:18:59.850535   74918 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I1025 18:18:59.872177   74918 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1025 18:18:59.957715   74918 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:18:59.979100   74918 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1025 18:18:59.979144   74918 docker.go:318] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:18:59.979207   74918 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:19:00.000872   74918 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1025 18:19:00.281934   74918 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:19:00.304810   74918 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1025 18:19:00.304844   74918 docker.go:318] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:19:00.304894   74918 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:19:00.326682   74918 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1025 18:19:00.604221   74918 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:19:00.626437   74918 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1025 18:19:00.626472   74918 docker.go:318] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:19:00.626530   74918 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:19:00.647376   74918 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1025 18:19:00.951331   74918 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:19:00.974941   74918 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1025 18:19:00.974979   74918 docker.go:318] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:19:00.975046   74918 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:19:00.998700   74918 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1025 18:19:01.270445   74918 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1025 18:19:01.293929   74918 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1025 18:19:01.293969   74918 docker.go:318] Removing image: registry.k8s.io/coredns:1.6.2
	I1025 18:19:01.294034   74918 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I1025 18:19:01.316379   74918 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1025 18:19:01.574077   74918 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1025 18:19:01.595024   74918 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1025 18:19:01.595047   74918 docker.go:318] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1025 18:19:01.595106   74918 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I1025 18:19:01.614743   74918 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1025 18:19:01.614805   74918 cache_images.go:92] LoadImages completed in 2.789203221s
	W1025 18:19:01.614860   74918 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
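Each "needs transfer" decision above comes from comparing the image ID reported by `docker image inspect --format {{.Id}}` against the hash minikube expects; on a mismatch the image is removed and reloaded from the local cache, which fails here because the cache files under .minikube/cache/images are absent. A rough Go sketch of that check (the expected ID below is a placeholder, not a real digest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image present in the container runtime
// differs from the ID we expect, mirroring the cache_images.go check in the log.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the runtime at all: load it from cache
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	// Placeholder expected ID; the log lines above show the real per-image hashes.
	fmt.Println("needs transfer:", needsTransfer("registry.k8s.io/pause:3.1", "sha256:expected-image-id"))
}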
	I1025 18:19:01.614934   74918 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:19:01.667324   74918 cni.go:84] Creating CNI manager for ""
	I1025 18:19:01.667340   74918 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:19:01.667356   74918 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 18:19:01.667373   74918 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-401000 NodeName:kubernetes-upgrade-401000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 18:19:01.667485   74918 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-401000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-401000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
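The generated config above uses podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12; these two ranges must stay disjoint or the CNI and kube-proxy would hand out colliding addresses. A small standard-library sanity check for that property (an illustration, not something minikube runs in this log):

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR ranges share any addresses. CIDR ranges are
// either nested or disjoint, so it is enough to test each base address.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16")
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")
	fmt.Println("pod/service CIDRs overlap:", overlaps(pods, svcs)) // false for the values above
}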
	
	I1025 18:19:01.667546   74918 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-401000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:19:01.667604   74918 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1025 18:19:01.677638   74918 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:19:01.677706   74918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:19:01.687104   74918 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I1025 18:19:01.704176   74918 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:19:01.721316   74918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I1025 18:19:01.738497   74918 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:19:01.743075   74918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:19:01.754791   74918 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000 for IP: 192.168.76.2
	I1025 18:19:01.754827   74918 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:19:01.754999   74918 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:19:01.755056   74918 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:19:01.755104   74918 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.key
	I1025 18:19:01.755118   74918 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.crt with IP's: []
	I1025 18:19:01.862988   74918 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.crt ...
	I1025 18:19:01.863002   74918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.crt: {Name:mk96b4c0f31368cb33e4cbc6db94226e8b8e7633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:19:01.863346   74918 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.key ...
	I1025 18:19:01.863361   74918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.key: {Name:mkd90fd467244b98129a0f4b2626a3a92588abe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:19:01.863583   74918 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.key.31bdca25
	I1025 18:19:01.863600   74918 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 18:19:01.955742   74918 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.crt.31bdca25 ...
	I1025 18:19:01.955753   74918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.crt.31bdca25: {Name:mkff2095bbac1fdad723d65bf3ef295dde4cb9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:19:01.956011   74918 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.key.31bdca25 ...
	I1025 18:19:01.956019   74918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.key.31bdca25: {Name:mk7fa7b76cc15b8172efd40617a66624456f0b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:19:01.956210   74918 certs.go:337] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.crt
	I1025 18:19:01.956373   74918 certs.go:341] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.key
	I1025 18:19:01.956520   74918 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.key
	I1025 18:19:01.956536   74918 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.crt with IP's: []
	I1025 18:19:02.260822   74918 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.crt ...
	I1025 18:19:02.260842   74918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.crt: {Name:mk804bdb64cc12ea71b816497db539ea038bd29b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:19:02.261150   74918 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.key ...
	I1025 18:19:02.261158   74918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.key: {Name:mk4df1047fbb56d5f911f904324b29045481d3a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
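The crypto.go lines above generate the apiserver certificate with the IP SANs [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]. A condensed, self-signed illustration of issuing a certificate with IP SANs via crypto/x509 (minikube actually signs these with the minikubeCA key rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and template for a certificate whose SANs are IP addresses, as in the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.76.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	// Self-signed here for brevity; the real flow signs with the cluster CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}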
	I1025 18:19:02.261549   74918 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:19:02.261598   74918 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:19:02.261610   74918 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:19:02.261646   74918 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:19:02.261677   74918 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:19:02.261706   74918 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:19:02.261767   74918 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:19:02.262285   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 18:19:02.286482   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 18:19:02.309572   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:19:02.332450   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 18:19:02.355375   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:19:02.378304   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:19:02.401365   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:19:02.425138   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:19:02.449588   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:19:02.473387   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:19:02.496807   74918 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:19:02.519924   74918 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:19:02.538102   74918 ssh_runner.go:195] Run: openssl version
	I1025 18:19:02.545575   74918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:19:02.556111   74918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:19:02.560607   74918 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:19:02.560662   74918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:19:02.567826   74918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:19:02.578156   74918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:19:02.588248   74918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:19:02.592852   74918 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:19:02.592902   74918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:19:02.600268   74918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:19:02.610333   74918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:19:02.620572   74918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:19:02.625174   74918 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:19:02.625226   74918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:19:02.632214   74918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
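The `ln -fs ... /etc/ssl/certs/b5213941.0` steps above rely on OpenSSL's subject-hash naming: the link name is the output of `openssl x509 -hash -noout -in <cert>` plus a ".0" suffix, which is how OpenSSL locates trusted CAs in a hashed directory. A small Go sketch of deriving that link name (paths are examples):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink runs `openssl x509 -hash` to compute the subject-hash link name
// OpenSSL expects under /etc/ssl/certs, e.g. b5213941.0 in the log above.
func hashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// The shell version uses `ln -fs` so an existing link is overwritten.
	fmt.Println("would link /etc/ssl/certs/" + link + " -> minikubeCA.pem")
}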
	I1025 18:19:02.642093   74918 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:19:02.646637   74918 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:19:02.646680   74918 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:19:02.646801   74918 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:19:02.667459   74918 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:19:02.677537   74918 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:19:02.687281   74918 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:19:02.687335   74918 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:19:02.696987   74918 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:19:02.697021   74918 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:19:02.750084   74918 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1025 18:19:02.750141   74918 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:19:03.010821   74918 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:19:03.010925   74918 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:19:03.011017   74918 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:19:03.197024   74918 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:19:03.197862   74918 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:19:03.204681   74918 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1025 18:19:03.283750   74918 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:19:03.308027   74918 out.go:204]   - Generating certificates and keys ...
	I1025 18:19:03.308107   74918 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:19:03.308180   74918 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:19:03.469300   74918 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 18:19:03.636681   74918 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 18:19:03.796970   74918 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 18:19:03.988573   74918 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 18:19:04.211754   74918 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 18:19:04.211903   74918 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-401000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 18:19:04.419603   74918 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 18:19:04.419749   74918 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-401000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 18:19:04.724945   74918 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 18:19:04.886197   74918 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 18:19:04.984840   74918 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 18:19:04.984953   74918 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:19:05.109466   74918 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:19:05.288456   74918 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:19:05.480491   74918 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:19:05.642570   74918 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:19:05.643275   74918 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:19:05.664884   74918 out.go:204]   - Booting up control plane ...
	I1025 18:19:05.665045   74918 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:19:05.665250   74918 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:19:05.665454   74918 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:19:05.665605   74918 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:19:05.665914   74918 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:19:45.654221   74918 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 18:19:45.655241   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:19:45.655474   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:19:50.656503   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:19:50.656778   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:20:00.657424   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:20:00.657568   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:20:20.659669   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:20:20.659874   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:21:00.662433   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:21:00.662844   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:21:00.662874   74918 kubeadm.go:322] 
	I1025 18:21:00.662930   74918 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 18:21:00.663024   74918 kubeadm.go:322] 	timed out waiting for the condition
	I1025 18:21:00.663043   74918 kubeadm.go:322] 
	I1025 18:21:00.663096   74918 kubeadm.go:322] This error is likely caused by:
	I1025 18:21:00.663158   74918 kubeadm.go:322] 	- The kubelet is not running
	I1025 18:21:00.663330   74918 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 18:21:00.663350   74918 kubeadm.go:322] 
	I1025 18:21:00.663504   74918 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 18:21:00.663551   74918 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 18:21:00.663597   74918 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 18:21:00.663617   74918 kubeadm.go:322] 
	I1025 18:21:00.663758   74918 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 18:21:00.663898   74918 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 18:21:00.664029   74918 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 18:21:00.664099   74918 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 18:21:00.664202   74918 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 18:21:00.664257   74918 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 18:21:00.666589   74918 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 18:21:00.666739   74918 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 18:21:00.666935   74918 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1025 18:21:00.667158   74918 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:21:00.667260   74918 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 18:21:00.667344   74918 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W1025 18:21:00.667444   74918 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-401000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-401000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 18:21:00.667489   74918 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 18:21:01.125139   74918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:21:01.140159   74918 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:21:01.140253   74918 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:21:01.151643   74918 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:21:01.151676   74918 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:21:01.207595   74918 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1025 18:21:01.207689   74918 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:21:01.508204   74918 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:21:01.508307   74918 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:21:01.508387   74918 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:21:01.734846   74918 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:21:01.736797   74918 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:21:01.746486   74918 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1025 18:21:01.821026   74918 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:21:01.844863   74918 out.go:204]   - Generating certificates and keys ...
	I1025 18:21:01.844958   74918 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:21:01.845048   74918 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:21:01.845141   74918 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:21:01.845205   74918 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:21:01.845306   74918 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:21:01.845378   74918 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 18:21:01.845460   74918 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:21:01.845571   74918 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:21:01.845717   74918 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:21:01.845869   74918 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:21:01.845930   74918 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 18:21:01.846028   74918 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:21:02.145432   74918 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:21:02.317363   74918 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:21:02.556980   74918 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:21:02.645503   74918 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:21:02.646168   74918 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:21:02.667965   74918 out.go:204]   - Booting up control plane ...
	I1025 18:21:02.668168   74918 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:21:02.668310   74918 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:21:02.668437   74918 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:21:02.668591   74918 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:21:02.668838   74918 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:21:42.656368   74918 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 18:21:42.656587   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:21:42.656777   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:21:47.658476   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:21:47.658670   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:21:57.659680   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:21:57.659835   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:22:17.662238   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:22:17.662396   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:22:57.664107   74918 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:22:57.664387   74918 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:22:57.664403   74918 kubeadm.go:322] 
	I1025 18:22:57.664459   74918 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 18:22:57.664546   74918 kubeadm.go:322] 	timed out waiting for the condition
	I1025 18:22:57.664558   74918 kubeadm.go:322] 
	I1025 18:22:57.664608   74918 kubeadm.go:322] This error is likely caused by:
	I1025 18:22:57.664695   74918 kubeadm.go:322] 	- The kubelet is not running
	I1025 18:22:57.664832   74918 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 18:22:57.664849   74918 kubeadm.go:322] 
	I1025 18:22:57.665049   74918 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 18:22:57.665151   74918 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 18:22:57.665201   74918 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 18:22:57.665215   74918 kubeadm.go:322] 
	I1025 18:22:57.665410   74918 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 18:22:57.665513   74918 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 18:22:57.665601   74918 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 18:22:57.665653   74918 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 18:22:57.665740   74918 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 18:22:57.665772   74918 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 18:22:57.667882   74918 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 18:22:57.667972   74918 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 18:22:57.668132   74918 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1025 18:22:57.668235   74918 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:22:57.668360   74918 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 18:22:57.668454   74918 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1025 18:22:57.668486   74918 kubeadm.go:406] StartCluster complete in 3m55.015135055s
	I1025 18:22:57.668591   74918 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:22:57.694202   74918 logs.go:284] 0 containers: []
	W1025 18:22:57.694218   74918 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:22:57.694294   74918 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:22:57.720864   74918 logs.go:284] 0 containers: []
	W1025 18:22:57.720880   74918 logs.go:286] No container was found matching "etcd"
	I1025 18:22:57.720961   74918 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:22:57.747118   74918 logs.go:284] 0 containers: []
	W1025 18:22:57.747134   74918 logs.go:286] No container was found matching "coredns"
	I1025 18:22:57.747210   74918 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:22:57.769496   74918 logs.go:284] 0 containers: []
	W1025 18:22:57.769511   74918 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:22:57.769590   74918 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:22:57.791666   74918 logs.go:284] 0 containers: []
	W1025 18:22:57.791681   74918 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:22:57.791756   74918 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:22:57.814301   74918 logs.go:284] 0 containers: []
	W1025 18:22:57.814322   74918 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:22:57.814411   74918 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:22:57.835492   74918 logs.go:284] 0 containers: []
	W1025 18:22:57.835507   74918 logs.go:286] No container was found matching "kindnet"
	I1025 18:22:57.835517   74918 logs.go:123] Gathering logs for kubelet ...
	I1025 18:22:57.835527   74918 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:22:57.879847   74918 logs.go:123] Gathering logs for dmesg ...
	I1025 18:22:57.879864   74918 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:22:57.895705   74918 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:22:57.895721   74918 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:22:57.965642   74918 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:22:57.965653   74918 logs.go:123] Gathering logs for Docker ...
	I1025 18:22:57.965660   74918 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:22:57.984479   74918 logs.go:123] Gathering logs for container status ...
	I1025 18:22:57.984494   74918 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 18:22:58.045393   74918 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 18:22:58.045418   74918 out.go:239] * 
	W1025 18:22:58.045472   74918 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 18:22:58.045504   74918 out.go:239] * 
	W1025 18:22:58.046175   74918 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:22:58.108176   74918 out.go:177] 
	W1025 18:22:58.150019   74918 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 18:22:58.150048   74918 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 18:22:58.150059   74918 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 18:22:58.171106   74918 out.go:177] 

                                                
                                                
** /stderr **
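Editor's note: the repeated [kubelet-check] lines in the stderr block above are kubeadm polling the kubelet health endpoint at http://localhost:10248/healthz until it answers or the 4m0s wait expires. A minimal Go sketch of that kind of poll loop, using only the endpoint and timings taken from the log (illustrative only, not kubeadm's actual implementation):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubeletHealthz polls the kubelet healthz endpoint until it returns
// HTTP 200 or the overall timeout expires, mirroring the check logged above.
func waitForKubeletHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is healthy
			}
		} else {
			fmt.Printf("kubelet not ready yet: %v\n", err)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s after %s", url, timeout)
}

func main() {
	// Values taken from the log: healthz on localhost:10248, up to 4m0s wait.
	if err := waitForKubeletHealthz("http://localhost:10248/healthz", 5*time.Second, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}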
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-401000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-401000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-401000: (1.595123653s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-401000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-401000 status --format={{.Host}}: exit status 7 (116.16571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
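Editor's note: the Run / Non-zero exit / Done lines are the integration harness shelling out to the minikube binary and inspecting the exit code; here `minikube status` prints "Stopped" and exits with status 7, which the harness notes "may be ok" right after a stop. A rough sketch of that exec-and-check pattern (the runMinikube helper is made up for illustration and is not the actual helpers_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runMinikube shells out to the minikube binary and returns combined output
// plus the process exit code, roughly the way the harness above does.
func runMinikube(bin string, args ...string) (string, int, error) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
		err = nil // a non-zero exit is data here, not a hard failure
	}
	return string(out), code, err
}

func main() {
	// Binary path and profile name taken from the log; adjust for your setup.
	out, code, err := runMinikube("out/minikube-darwin-amd64",
		"-p", "kubernetes-upgrade-401000", "status", "--format={{.Host}}")
	if err != nil {
		fmt.Println("failed to run minikube:", err)
		return
	}
	// The log above pairs exit status 7 with "Stopped", which the harness
	// treats as acceptable immediately after `minikube stop`.
	fmt.Printf("host state: %q (exit %d)\n", strings.TrimSpace(out), code)
}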
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-401000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-401000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker : (4m38.970962766s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-401000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-401000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-401000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (682.497611ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-401000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-401000
	    minikube start -p kubernetes-upgrade-401000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4010002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-401000 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
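Editor's note: the K8S_DOWNGRADE_UNSUPPORTED exit above is minikube refusing to move the existing v1.28.3 cluster back to v1.16.0 in place. The gist of such a guard is a version comparison between the requested Kubernetes version and the one recorded in the profile; the sketch below is assumed logic for illustration, not minikube's actual code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion splits a "vMAJOR.MINOR.PATCH" string into integers.
func parseVersion(v string) ([3]int, error) {
	var out [3]int
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	if len(parts) != 3 {
		return out, fmt.Errorf("unexpected version format: %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, fmt.Errorf("bad component %q in %q", p, v)
		}
		out[i] = n
	}
	return out, nil
}

// checkNoDowngrade returns an error when the requested version is older than
// the version already recorded for the cluster, mirroring the refusal above.
func checkNoDowngrade(existing, requested string) error {
	e, err := parseVersion(existing)
	if err != nil {
		return err
	}
	r, err := parseVersion(requested)
	if err != nil {
		return err
	}
	for i := range e {
		if r[i] < e[i] {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
		}
		if r[i] > e[i] {
			return nil
		}
	}
	return nil
}

func main() {
	// Versions taken from the log above.
	if err := checkNoDowngrade("v1.28.3", "v1.16.0"); err != nil {
		fmt.Println("refusing:", err)
	}
}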
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-401000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-401000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker : (32.188792447s)
version_upgrade_test.go:292: *** TestKubernetesUpgrade FAILED at 2023-10-25 18:28:11.894877 -0700 PDT m=+2968.931126618
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-401000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-401000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "106edde5ef64c3a219e469bfecb642b91a498259cab3d83e7128b2fa2099fd61",
	        "Created": "2023-10-26T01:18:46.065801303Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225081,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:23:01.378195922Z",
	            "FinishedAt": "2023-10-26T01:22:58.745135713Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/106edde5ef64c3a219e469bfecb642b91a498259cab3d83e7128b2fa2099fd61/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/106edde5ef64c3a219e469bfecb642b91a498259cab3d83e7128b2fa2099fd61/hostname",
	        "HostsPath": "/var/lib/docker/containers/106edde5ef64c3a219e469bfecb642b91a498259cab3d83e7128b2fa2099fd61/hosts",
	        "LogPath": "/var/lib/docker/containers/106edde5ef64c3a219e469bfecb642b91a498259cab3d83e7128b2fa2099fd61/106edde5ef64c3a219e469bfecb642b91a498259cab3d83e7128b2fa2099fd61-json.log",
	        "Name": "/kubernetes-upgrade-401000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-401000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-401000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e2cc9b4da3649ac702208ec461783ecd95c5f7717925105d5388fce38c5730fc-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2cc9b4da3649ac702208ec461783ecd95c5f7717925105d5388fce38c5730fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2cc9b4da3649ac702208ec461783ecd95c5f7717925105d5388fce38c5730fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2cc9b4da3649ac702208ec461783ecd95c5f7717925105d5388fce38c5730fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-401000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-401000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-401000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-401000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-401000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ded3b1457d3ec006c31310f0b87bb1ea594c7491f737b03f0204e06b1cae34d1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58331"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58332"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58333"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58334"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58335"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ded3b1457d3e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-401000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "106edde5ef64",
	                        "kubernetes-upgrade-401000"
	                    ],
	                    "NetworkID": "106889f9448d7660d9862ffd8e55ff9ac13fc6b0ae05a4f818afa7ad2b6e3a57",
	                    "EndpointID": "1f95cd2804ecf5747044def5ad61f15ef337d94838041e3ecd7a9381c5de8392",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
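Editor's note: the post-mortem dumps the full `docker inspect` document, but in practice only individual fields are needed, for example the host port Docker published for 22/tcp, which the later provisioning log reads with a --format template. A small sketch of extracting a single field the same way via os/exec (container name and template are taken from the log; the inspectField helper name is made up for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField runs `docker container inspect -f <tmpl> <name>` and returns
// the trimmed result, e.g. the host port Docker published for 22/tcp.
func inspectField(container, tmpl string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Same format template the provisioning log uses to find the SSH port mapping.
	port, err := inspectField("kubernetes-upgrade-401000",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh is published on 127.0.0.1:" + port)
}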
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-401000 -n kubernetes-upgrade-401000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-401000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-401000 logs -n 25: (2.753871723s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo cat                           | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo cat                           | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo cat                           | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo docker                        | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo cat                           | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo cat                           | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:26 PDT | 25 Oct 23 18:26 PDT |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo cat                           | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo cat                           | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo                               | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo find                          | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-143000 sudo crio                          | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-143000                                    | kindnet-143000            | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:27 PDT |
	| start   | -p calico-143000 --memory=3072                       | calico-143000             | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=calico --driver=docker                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-401000                         | kubernetes-upgrade-401000 | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-401000                         | kubernetes-upgrade-401000 | jenkins | v1.31.2 | 25 Oct 23 18:27 PDT | 25 Oct 23 18:28 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 18:27:39
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 18:27:39.757540   77643 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:27:39.757741   77643 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:27:39.757747   77643 out.go:309] Setting ErrFile to fd 2...
	I1025 18:27:39.757751   77643 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:27:39.757936   77643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:27:39.759383   77643 out.go:303] Setting JSON to false
	I1025 18:27:39.781932   77643 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":34027,"bootTime":1698249632,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 18:27:39.782047   77643 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:27:39.803435   77643 out.go:177] * [kubernetes-upgrade-401000] minikube v1.31.2 on Darwin 14.0
	I1025 18:27:39.882274   77643 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 18:27:39.861323   77643 notify.go:220] Checking for updates...
	I1025 18:27:39.924178   77643 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:27:39.945713   77643 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:27:39.967440   77643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:27:40.041264   77643 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 18:27:40.099317   77643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:27:40.136823   77643 config.go:182] Loaded profile config "kubernetes-upgrade-401000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:27:40.137239   77643 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:27:40.204795   77643 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 18:27:40.204970   77643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:27:40.369808   77643 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2023-10-26 01:27:40.357092559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:27:40.411868   77643 out.go:177] * Using the docker driver based on existing profile
	I1025 18:27:40.433059   77643 start.go:298] selected driver: docker
	I1025 18:27:40.433073   77643 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kubernetes-upgrade-401000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:27:40.433131   77643 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:27:40.436180   77643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:27:40.548545   77643 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2023-10-26 01:27:40.537735634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:27:40.548818   77643 cni.go:84] Creating CNI manager for ""
	I1025 18:27:40.548836   77643 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:27:40.548849   77643 start_flags.go:323] config:
	{Name:kubernetes-upgrade-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kubernetes-upgrade-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:27:40.590884   77643 out.go:177] * Starting control plane node kubernetes-upgrade-401000 in cluster kubernetes-upgrade-401000
	I1025 18:27:40.628066   77643 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:27:40.650117   77643 out.go:177] * Pulling base image ...
	I1025 18:27:40.708082   77643 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:27:40.708122   77643 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:27:40.708144   77643 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 18:27:40.708176   77643 cache.go:56] Caching tarball of preloaded images
	I1025 18:27:40.708400   77643 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:27:40.708419   77643 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 18:27:40.708968   77643 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/config.json ...
	I1025 18:27:40.782705   77643 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:27:40.782732   77643 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:27:40.782766   77643 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:27:40.782861   77643 start.go:365] acquiring machines lock for kubernetes-upgrade-401000: {Name:mk6409086ac74878831c315bb785a33c1dba8141 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:27:40.782967   77643 start.go:369] acquired machines lock for "kubernetes-upgrade-401000" in 80.162µs
	I1025 18:27:40.782994   77643 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:27:40.783004   77643 fix.go:54] fixHost starting: 
	I1025 18:27:40.783286   77643 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-401000 --format={{.State.Status}}
	I1025 18:27:40.840078   77643 fix.go:102] recreateIfNeeded on kubernetes-upgrade-401000: state=Running err=<nil>
	W1025 18:27:40.840127   77643 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 18:27:40.885092   77643 out.go:177] * Updating the running docker "kubernetes-upgrade-401000" container ...
	I1025 18:27:37.347797   77518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:27:37.847546   77518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:27:38.348510   77518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:27:38.847517   77518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:27:39.347477   77518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:27:39.847907   77518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:27:40.347586   77518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:27:40.847864   77518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:27:41.347676   77518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:27:41.576981   77518 kubeadm.go:1081] duration metric: took 11.08252248s to wait for elevateKubeSystemPrivileges.
	I1025 18:27:41.577012   77518 kubeadm.go:406] StartCluster complete in 23.32493062s
	I1025 18:27:41.577035   77518 settings.go:142] acquiring lock: {Name:mkca0a8fe84aa865309571104a1d51551b90d38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:27:41.577155   77518 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:27:41.578354   77518 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:27:41.578695   77518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 18:27:41.578763   77518 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 18:27:41.578830   77518 addons.go:69] Setting storage-provisioner=true in profile "calico-143000"
	I1025 18:27:41.578868   77518 addons.go:69] Setting default-storageclass=true in profile "calico-143000"
	I1025 18:27:41.578886   77518 addons.go:231] Setting addon storage-provisioner=true in "calico-143000"
	I1025 18:27:41.578913   77518 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-143000"
	I1025 18:27:41.578971   77518 config.go:182] Loaded profile config "calico-143000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:27:41.579002   77518 host.go:66] Checking if "calico-143000" exists ...
	I1025 18:27:41.579400   77518 cli_runner.go:164] Run: docker container inspect calico-143000 --format={{.State.Status}}
	I1025 18:27:41.580756   77518 cli_runner.go:164] Run: docker container inspect calico-143000 --format={{.State.Status}}
	I1025 18:27:41.657800   77518 addons.go:231] Setting addon default-storageclass=true in "calico-143000"
	I1025 18:27:41.657835   77518 host.go:66] Checking if "calico-143000" exists ...
	I1025 18:27:41.658219   77518 cli_runner.go:164] Run: docker container inspect calico-143000 --format={{.State.Status}}
	I1025 18:27:41.682873   77518 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:27:41.677131   77518 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-143000" context rescaled to 1 replicas
	I1025 18:27:41.719269   77518 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:27:41.756255   77518 out.go:177] * Verifying Kubernetes components...
	I1025 18:27:41.719379   77518 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:27:41.756292   77518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:27:41.816505   77518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:27:41.816540   77518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-143000
	I1025 18:27:41.824984   77518 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:27:41.825004   77518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:27:41.825093   77518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-143000
	I1025 18:27:41.883653   77518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58787 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/calico-143000/id_rsa Username:docker}
	I1025 18:27:41.899678   77518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58787 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/calico-143000/id_rsa Username:docker}
	I1025 18:27:42.078591   77518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 18:27:42.078745   77518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-143000
	I1025 18:27:42.165988   77518 node_ready.go:35] waiting up to 15m0s for node "calico-143000" to be "Ready" ...
	I1025 18:27:42.180842   77518 node_ready.go:49] node "calico-143000" has status "Ready":"True"
	I1025 18:27:42.180867   77518 node_ready.go:38] duration metric: took 14.836469ms waiting for node "calico-143000" to be "Ready" ...
	I1025 18:27:42.180885   77518 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:27:42.197312   77518 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace to be "Ready" ...
	I1025 18:27:42.376689   77518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:27:42.382721   77518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:27:44.105949   77518 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.002418982s)
	I1025 18:27:44.105965   77518 start.go:926] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1025 18:27:44.105989   77518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.729224617s)
	I1025 18:27:44.303668   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:27:44.319236   77518 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.936422931s)
	I1025 18:27:44.344151   77518 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1025 18:27:40.922112   77643 machine.go:88] provisioning docker machine ...
	I1025 18:27:40.922166   77643 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-401000"
	I1025 18:27:40.922301   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:40.983118   77643 main.go:141] libmachine: Using SSH client type: native
	I1025 18:27:40.983486   77643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58331 <nil> <nil>}
	I1025 18:27:40.983499   77643 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-401000 && echo "kubernetes-upgrade-401000" | sudo tee /etc/hostname
	I1025 18:27:41.129353   77643 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-401000
	
	I1025 18:27:41.129454   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:41.189644   77643 main.go:141] libmachine: Using SSH client type: native
	I1025 18:27:41.189998   77643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58331 <nil> <nil>}
	I1025 18:27:41.190021   77643 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-401000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-401000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-401000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:27:41.315251   77643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
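The "About to run SSH command" steps above are executed by libmachine over the node container's forwarded SSH port (127.0.0.1:58331) with the profile's id_rsa key. Below is a minimal golang.org/x/crypto/ssh sketch of running one such provisioning command; apart from the port, user and key path taken from the log, everything here is illustrative.

// sshrun.go: minimal sketch of running a provisioning command over SSH the
// way the libmachine steps above do. The key path and forwarded port 58331
// come from the log; the rest is illustrative.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:58331", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput(`sudo hostname kubernetes-upgrade-401000 && echo "kubernetes-upgrade-401000" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}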
	I1025 18:27:41.315275   77643 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:27:41.315296   77643 ubuntu.go:177] setting up certificates
	I1025 18:27:41.315311   77643 provision.go:83] configureAuth start
	I1025 18:27:41.315394   77643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-401000
	I1025 18:27:41.381889   77643 provision.go:138] copyHostCerts
	I1025 18:27:41.382017   77643 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:27:41.382032   77643 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:27:41.382240   77643 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:27:41.382509   77643 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:27:41.382519   77643 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:27:41.382597   77643 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:27:41.382836   77643 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:27:41.382844   77643 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:27:41.382919   77643 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:27:41.383135   77643 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-401000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-401000]
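provision.go:112 above issues a server certificate signed by the local minikube CA, with the listed DNS names and IP addresses as SANs. The sketch below shows the general crypto/x509 pattern for issuing such a certificate; it generates a throwaway CA in-process instead of loading ca.pem/ca-key.pem from .minikube/certs, and all names are illustrative.

// servercert.go: minimal sketch of issuing a CA-signed server certificate
// with SANs, the same idea as the "generating server cert" step above.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// check keeps the sketch short; real code would handle each error.
func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA; minikube instead loads ca.pem/ca-key.pem from disk.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the DNS and IP SANs the node is reached by.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "kubernetes-upgrade-401000"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-401000"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	// Write the leaf out as PEM, the analogue of machines/server.pem in the log.
	f, err := os.Create("server.pem")
	check(err)
	defer f.Close()
	check(pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}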
	I1025 18:27:41.485139   77643 provision.go:172] copyRemoteCerts
	I1025 18:27:41.485242   77643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:27:41.485330   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:41.548010   77643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58331 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:27:41.641086   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 18:27:41.671916   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 18:27:41.714409   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:27:41.739473   77643 provision.go:86] duration metric: configureAuth took 424.134292ms
	I1025 18:27:41.739492   77643 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:27:41.739628   77643 config.go:182] Loaded profile config "kubernetes-upgrade-401000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:27:41.739694   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:41.826670   77643 main.go:141] libmachine: Using SSH client type: native
	I1025 18:27:41.827094   77643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58331 <nil> <nil>}
	I1025 18:27:41.827108   77643 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:27:41.970800   77643 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:27:41.970820   77643 ubuntu.go:71] root file system type: overlay
	I1025 18:27:41.970919   77643 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:27:41.971004   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:42.036398   77643 main.go:141] libmachine: Using SSH client type: native
	I1025 18:27:42.036708   77643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58331 <nil> <nil>}
	I1025 18:27:42.036784   77643 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:27:42.190758   77643 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:27:42.190923   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:42.249667   77643 main.go:141] libmachine: Using SSH client type: native
	I1025 18:27:42.249962   77643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 58331 <nil> <nil>}
	I1025 18:27:42.249976   77643 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:27:42.392564   77643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:27:42.392606   77643 machine.go:91] provisioned docker machine in 1.470428942s
	I1025 18:27:42.392657   77643 start.go:300] post-start starting for "kubernetes-upgrade-401000" (driver="docker")
	I1025 18:27:42.392723   77643 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:27:42.392898   77643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:27:42.393020   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:42.448530   77643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58331 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:27:42.541474   77643 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:27:42.545669   77643 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:27:42.545694   77643 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:27:42.545701   77643 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:27:42.545705   77643 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:27:42.545715   77643 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:27:42.545817   77643 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:27:42.545968   77643 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:27:42.546146   77643 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:27:42.555484   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:27:42.590292   77643 start.go:303] post-start completed in 197.596928ms
	I1025 18:27:42.590409   77643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:27:42.590560   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:42.649323   77643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58331 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:27:42.739315   77643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:27:42.744628   77643 fix.go:56] fixHost completed within 1.96156283s
	I1025 18:27:42.744645   77643 start.go:83] releasing machines lock for "kubernetes-upgrade-401000", held for 1.961608914s
	I1025 18:27:42.744726   77643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-401000
	I1025 18:27:42.810279   77643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:27:42.810282   77643 ssh_runner.go:195] Run: cat /version.json
	I1025 18:27:42.810371   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:42.810372   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:42.869213   77643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58331 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:27:42.869252   77643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58331 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:27:43.088840   77643 ssh_runner.go:195] Run: systemctl --version
	I1025 18:27:43.100541   77643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 18:27:43.107839   77643 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 18:27:43.107905   77643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 18:27:43.117261   77643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 18:27:43.127156   77643 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1025 18:27:43.127168   77643 start.go:472] detecting cgroup driver to use...
	I1025 18:27:43.127184   77643 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:27:43.127304   77643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:27:43.143477   77643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 18:27:43.154129   77643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:27:43.164933   77643 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:27:43.165016   77643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:27:43.179644   77643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:27:43.200640   77643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:27:43.214188   77643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:27:43.226127   77643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:27:43.235511   77643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:27:43.245853   77643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:27:43.255783   77643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:27:43.265199   77643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:27:43.345847   77643 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:27:44.401724   77518 addons.go:502] enable addons completed in 2.82287661s: enabled=[default-storageclass storage-provisioner]
	I1025 18:27:46.307226   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:27:48.880459   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:27:51.385500   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:27:53.541652   77643 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.195457538s)
	I1025 18:27:53.541675   77643 start.go:472] detecting cgroup driver to use...
	I1025 18:27:53.541688   77643 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:27:53.541761   77643 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:27:53.561483   77643 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:27:53.561555   77643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:27:53.577510   77643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:27:53.607536   77643 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:27:53.617061   77643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:27:53.634891   77643 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:27:53.676116   77643 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:27:53.803261   77643 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:27:53.911621   77643 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:27:53.911749   77643 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
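docker.go:555 above writes a small /etc/docker/daemon.json so that Docker's cgroup driver matches the kubelet's (cgroupfs on this node). A sketch of producing such a file follows; the exact keys in minikube's 130-byte daemon.json may differ, only the exec-opts setting is the point here.

// daemonjson.go: sketch of generating a minimal daemon.json that pins Docker's
// cgroup driver to cgroupfs, as the "configuring docker to use cgroupfs" step
// above does. Keys other than exec-opts are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		// native.cgroupdriver must match the kubelet's cgroupDriver setting.
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"storage-driver": "overlay2",
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}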
	I1025 18:27:53.963540   77643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:27:54.078723   77643 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:27:54.455982   77643 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:27:54.558744   77643 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:27:54.636509   77643 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:27:54.713271   77643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:27:54.776806   77643 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:27:54.819856   77643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:27:54.901096   77643 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 18:27:55.016543   77643 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:27:55.016668   77643 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:27:55.023404   77643 start.go:540] Will wait 60s for crictl version
	I1025 18:27:55.023463   77643 ssh_runner.go:195] Run: which crictl
	I1025 18:27:55.028810   77643 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:27:55.083381   77643 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 18:27:55.083506   77643 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:27:55.125743   77643 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:27:53.876726   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:27:56.308364   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:27:55.173772   77643 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 18:27:55.173874   77643 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-401000 dig +short host.docker.internal
	I1025 18:27:55.326020   77643 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:27:55.326119   77643 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
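network.go:96 above learns the host's IP by exec'ing `dig +short host.docker.internal` inside the node container, where Docker Desktop's embedded DNS answers that name. An equivalent lookup from Go, run inside the container, might look like this:

// hostip.go: minimal sketch of resolving host.docker.internal with Go's
// resolver, the same information the `dig +short` exec above returns.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("host.docker.internal")
	if err != nil {
		panic(err)
	}
	for _, a := range addrs {
		fmt.Println(a) // e.g. 192.168.65.254 in the log above
	}
}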
	I1025 18:27:55.331850   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:55.389401   77643 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:27:55.389509   77643 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:27:55.422477   77643 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1025 18:27:55.422499   77643 docker.go:623] Images already preloaded, skipping extraction
	I1025 18:27:55.422572   77643 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:27:55.444255   77643 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1025 18:27:55.444276   77643 cache_images.go:84] Images are preloaded, skipping loading
	I1025 18:27:55.444367   77643 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:27:55.516253   77643 cni.go:84] Creating CNI manager for ""
	I1025 18:27:55.516280   77643 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:27:55.516312   77643 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 18:27:55.516332   77643 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-401000 NodeName:kubernetes-upgrade-401000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:27:55.516452   77643 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-401000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:27:55.516534   77643 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-401000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:kubernetes-upgrade-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:27:55.516598   77643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 18:27:55.527251   77643 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:27:55.527314   77643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:27:55.536994   77643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I1025 18:27:55.554740   77643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:27:55.572520   77643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
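The kubeadm config printed above (kubeadm.go:181) is rendered from the options struct at kubeadm.go:176 and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new in the scp line above. The sketch below shows the general text/template pattern for rendering such a config from a small options struct; the struct fields and the template are a reduced, illustrative subset, not minikube's actual template.

// kubeadmtemplate.go: minimal sketch of rendering a kubeadm config with
// text/template from a small options struct, the pattern behind the
// "kubeadm config" printed above. Only a subset of fields is shown.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.76.2",
		BindPort:          8443,
		NodeName:          "kubernetes-upgrade-401000",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.28.3",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}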
	I1025 18:27:55.601087   77643 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:27:55.609834   77643 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000 for IP: 192.168.76.2
	I1025 18:27:55.609871   77643 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:27:55.610189   77643 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:27:55.610310   77643 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:27:55.610494   77643 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.key
	I1025 18:27:55.610650   77643 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.key.31bdca25
	I1025 18:27:55.610776   77643 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.key
	I1025 18:27:55.611236   77643 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:27:55.611305   77643 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:27:55.611319   77643 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:27:55.611365   77643 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:27:55.611409   77643 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:27:55.611446   77643 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:27:55.611535   77643 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:27:55.612243   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 18:27:55.637779   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 18:27:55.660795   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:27:55.688898   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 18:27:55.722448   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:27:55.745820   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:27:55.769073   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:27:55.804969   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:27:55.833283   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:27:55.857363   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:27:55.885678   77643 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:27:55.922793   77643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:27:55.940943   77643 ssh_runner.go:195] Run: openssl version
	I1025 18:27:55.947232   77643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:27:55.957581   77643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:27:55.962040   77643 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:27:55.962085   77643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:27:55.969257   77643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:27:55.982747   77643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:27:56.002826   77643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:27:56.011086   77643 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:27:56.011163   77643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:27:56.020176   77643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:27:56.030023   77643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:27:56.040620   77643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:27:56.045230   77643 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:27:56.045283   77643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:27:56.052394   77643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:27:56.062014   77643 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:27:56.066630   77643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 18:27:56.074410   77643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 18:27:56.085918   77643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 18:27:56.098826   77643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 18:27:56.111235   77643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 18:27:56.120195   77643 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
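The `openssl x509 -checkend 86400` runs above verify that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. A minimal Go equivalent of that check (the path is just an example argument):

// checkend.go: minimal Go equivalent of `openssl x509 -checkend 86400` as run
// in the log above: exit non-zero if the certificate expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/etcd/server.crt
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Printf("certificate %q expires at %s (within 24h)\n", os.Args[1], cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 24h")
}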
	I1025 18:27:56.127325   77643 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:kubernetes-upgrade-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:27:56.127429   77643 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:27:56.147659   77643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:27:56.157363   77643 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 18:27:56.157383   77643 kubeadm.go:636] restartCluster start
	I1025 18:27:56.157434   77643 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 18:27:56.166450   77643 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:27:56.166528   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:27:56.236345   77643 kubeconfig.go:92] found "kubernetes-upgrade-401000" server: "https://127.0.0.1:58335"
	I1025 18:27:56.237072   77643 kapi.go:59] client config for kubernetes-upgrade-401000: &rest.Config{Host:"https://127.0.0.1:58335", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:27:56.237755   77643 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 18:27:56.247590   77643 api_server.go:166] Checking apiserver status ...
	I1025 18:27:56.247642   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:27:56.258448   77643 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:27:56.258461   77643 api_server.go:166] Checking apiserver status ...
	I1025 18:27:56.258522   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:27:56.268977   77643 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:27:56.769201   77643 api_server.go:166] Checking apiserver status ...
	I1025 18:27:56.769319   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:27:56.785381   77643 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:27:57.269484   77643 api_server.go:166] Checking apiserver status ...
	I1025 18:27:57.269550   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:27:57.282130   77643 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:27:57.769327   77643 api_server.go:166] Checking apiserver status ...
	I1025 18:27:57.769478   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:27:57.782266   77643 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:27:58.269369   77643 api_server.go:166] Checking apiserver status ...
	I1025 18:27:58.269463   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:27:58.354526   77643 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:27:58.769191   77643 api_server.go:166] Checking apiserver status ...
	I1025 18:27:58.769366   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:27:58.860605   77643 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/13983/cgroup
	W1025 18:27:58.879660   77643 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/13983/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:27:58.879755   77643 ssh_runner.go:195] Run: ls
	I1025 18:27:58.955200   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:27:58.957137   77643 api_server.go:269] stopped: https://127.0.0.1:58335/healthz: Get "https://127.0.0.1:58335/healthz": EOF
	I1025 18:27:58.957178   77643 retry.go:31] will retry after 198.241435ms: state is "Stopped"
	I1025 18:27:59.157496   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:27:58.804644   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:28:00.807620   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:28:01.260932   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 18:28:01.260970   77643 retry.go:31] will retry after 370.707542ms: https://127.0.0.1:58335/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 18:28:01.632033   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:28:01.637411   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:28:01.637429   77643 retry.go:31] will retry after 396.2963ms: https://127.0.0.1:58335/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:28:02.033867   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:28:02.039714   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:28:02.039736   77643 retry.go:31] will retry after 432.464281ms: https://127.0.0.1:58335/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:28:02.472401   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:28:02.480742   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 200:
	ok
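The api_server.go checks above poll the forwarded apiserver endpoint at https://127.0.0.1:58335/healthz, retrying through connection EOFs, anonymous-user 403s and post-start-hook 500s until a 200 comes back. Below is a minimal sketch of that retry loop; for brevity it skips TLS verification, whereas the real check trusts the cluster CA, and the delay is fixed rather than randomized as in retry.go.

// healthzretry.go: minimal sketch of polling an apiserver /healthz endpoint
// until it reports 200, similar to the api_server.go checks above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: the real check verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s did not become healthy within %v", url, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitHealthz("https://127.0.0.1:58335/healthz", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}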
	I1025 18:28:02.500776   77643 system_pods.go:86] 5 kube-system pods found
	I1025 18:28:02.500805   77643 system_pods.go:89] "etcd-kubernetes-upgrade-401000" [bc70a2e2-fc87-474c-a481-3ef37c096782] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 18:28:02.500822   77643 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-401000" [599c0076-3fbe-470d-ac22-1196a23c79e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 18:28:02.500842   77643 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-401000" [f4be5561-f324-44d7-8aa8-a1d468016b34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 18:28:02.500860   77643 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-401000" [a589000e-fa14-4ce1-b3bc-9384d4131f4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 18:28:02.500871   77643 system_pods.go:89] "storage-provisioner" [13677a2c-3ada-490d-997d-b75d1b7a7528] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1025 18:28:02.500883   77643 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I1025 18:28:02.500894   77643 kubeadm.go:1128] stopping kube-system containers ...
	I1025 18:28:02.500998   77643 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:28:02.529934   77643 docker.go:464] Stopping containers: [19b8f2a32a0c 10879cc5870c 4bdf393dd99e 7fe3d0f5cb39 36378ecf863e 8a614aa67169 8c62e1428377 e8738113c0c1 3eb1fe98f1cf 3cb7793c25c0 27309db933e3 c45e35f4c036 b9806481c9a2 368912a8de75 2cad05b58f02 8e7ce1042002]
	I1025 18:28:02.530016   77643 ssh_runner.go:195] Run: docker stop 19b8f2a32a0c 10879cc5870c 4bdf393dd99e 7fe3d0f5cb39 36378ecf863e 8a614aa67169 8c62e1428377 e8738113c0c1 3eb1fe98f1cf 3cb7793c25c0 27309db933e3 c45e35f4c036 b9806481c9a2 368912a8de75 2cad05b58f02 8e7ce1042002
	I1025 18:28:03.192551   77643 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 18:28:03.283423   77643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:28:03.356961   77643 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct 26 01:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 26 01:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Oct 26 01:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 26 01:27 /etc/kubernetes/scheduler.conf
	
	I1025 18:28:03.357030   77643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 18:28:03.368943   77643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 18:28:03.383273   77643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 18:28:03.399160   77643 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:28:03.399239   77643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:28:03.418183   77643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 18:28:03.430583   77643 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:28:03.430664   77643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 18:28:03.456078   77643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:28:03.467151   77643 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 18:28:03.467165   77643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:28:03.537469   77643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:28:04.402875   77643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:28:04.570533   77643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:28:04.655802   77643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:28:04.723731   77643 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:28:04.723809   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:28:02.809397   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:28:05.309371   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:28:04.769047   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:28:05.358194   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:28:05.858583   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:28:05.872403   77643 api_server.go:72] duration metric: took 1.148638085s to wait for apiserver process to appear ...
	I1025 18:28:05.872416   77643 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:28:05.872426   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:28:08.460597   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 18:28:08.460625   77643 api_server.go:103] status: https://127.0.0.1:58335/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 18:28:08.460641   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:28:08.557991   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:28:08.558032   77643 api_server.go:103] status: https://127.0.0.1:58335/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:28:09.058269   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:28:09.063548   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:28:09.063565   77643 api_server.go:103] status: https://127.0.0.1:58335/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:28:09.558329   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:28:09.564902   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:28:09.564919   77643 api_server.go:103] status: https://127.0.0.1:58335/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:28:10.058796   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:28:10.065359   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 200:
	ok
	I1025 18:28:10.072298   77643 api_server.go:141] control plane version: v1.28.3
	I1025 18:28:10.072311   77643 api_server.go:131] duration metric: took 4.199763551s to wait for apiserver health ...
	I1025 18:28:10.072316   77643 cni.go:84] Creating CNI manager for ""
	I1025 18:28:10.072325   77643 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:28:10.094671   77643 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 18:28:10.117604   77643 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 18:28:10.127719   77643 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1025 18:28:10.145126   77643 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:28:10.151549   77643 system_pods.go:59] 5 kube-system pods found
	I1025 18:28:10.151564   77643 system_pods.go:61] "etcd-kubernetes-upgrade-401000" [bc70a2e2-fc87-474c-a481-3ef37c096782] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 18:28:10.151572   77643 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-401000" [599c0076-3fbe-470d-ac22-1196a23c79e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 18:28:10.151584   77643 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-401000" [f4be5561-f324-44d7-8aa8-a1d468016b34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 18:28:10.151596   77643 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-401000" [a589000e-fa14-4ce1-b3bc-9384d4131f4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 18:28:10.151600   77643 system_pods.go:61] "storage-provisioner" [13677a2c-3ada-490d-997d-b75d1b7a7528] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1025 18:28:10.151606   77643 system_pods.go:74] duration metric: took 6.468216ms to wait for pod list to return data ...
	I1025 18:28:10.151611   77643 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:28:10.155235   77643 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:28:10.155251   77643 node_conditions.go:123] node cpu capacity is 12
	I1025 18:28:10.155261   77643 node_conditions.go:105] duration metric: took 3.645727ms to run NodePressure ...
	I1025 18:28:10.155273   77643 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:28:10.428297   77643 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:28:10.437732   77643 ops.go:34] apiserver oom_adj: -16
	I1025 18:28:10.437750   77643 kubeadm.go:640] restartCluster took 14.279925838s
	I1025 18:28:10.437758   77643 kubeadm.go:406] StartCluster complete in 14.310007066s
	I1025 18:28:10.437771   77643 settings.go:142] acquiring lock: {Name:mkca0a8fe84aa865309571104a1d51551b90d38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:28:10.437847   77643 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:28:10.438513   77643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:28:10.438816   77643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 18:28:10.438840   77643 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 18:28:10.438882   77643 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-401000"
	I1025 18:28:10.438900   77643 addons.go:231] Setting addon storage-provisioner=true in "kubernetes-upgrade-401000"
	I1025 18:28:10.438899   77643 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-401000"
	W1025 18:28:10.438906   77643 addons.go:240] addon storage-provisioner should already be in state true
	I1025 18:28:10.438956   77643 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-401000"
	I1025 18:28:10.438960   77643 config.go:182] Loaded profile config "kubernetes-upgrade-401000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:28:10.438968   77643 host.go:66] Checking if "kubernetes-upgrade-401000" exists ...
	I1025 18:28:10.439197   77643 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-401000 --format={{.State.Status}}
	I1025 18:28:10.439329   77643 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-401000 --format={{.State.Status}}
	I1025 18:28:10.439805   77643 kapi.go:59] client config for kubernetes-upgrade-401000: &rest.Config{Host:"https://127.0.0.1:58335", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:28:10.445910   77643 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-401000" context rescaled to 1 replicas
	I1025 18:28:10.445953   77643 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:28:10.467623   77643 out.go:177] * Verifying Kubernetes components...
	I1025 18:28:10.540493   77643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:28:10.573283   77643 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:28:10.553873   77643 kapi.go:59] client config for kubernetes-upgrade-401000: &rest.Config{Host:"https://127.0.0.1:58335", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubernetes-upgrade-401000/client.key", CAFile:"/Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f8260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:28:10.554377   77643 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1025 18:28:10.560665   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:28:10.573789   77643 addons.go:231] Setting addon default-storageclass=true in "kubernetes-upgrade-401000"
	W1025 18:28:10.610403   77643 addons.go:240] addon default-storageclass should already be in state true
	I1025 18:28:10.610439   77643 host.go:66] Checking if "kubernetes-upgrade-401000" exists ...
	I1025 18:28:10.610468   77643 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:28:10.610480   77643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:28:10.610586   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:28:10.612022   77643 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-401000 --format={{.State.Status}}
	I1025 18:28:10.703581   77643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58331 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:28:10.703634   77643 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:28:10.703746   77643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:28:10.703804   77643 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:28:10.703823   77643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:28:10.703995   77643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-401000
	I1025 18:28:10.728350   77643 api_server.go:72] duration metric: took 282.35313ms to wait for apiserver process to appear ...
	I1025 18:28:10.728389   77643 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:28:10.728416   77643 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58335/healthz ...
	I1025 18:28:10.737092   77643 api_server.go:279] https://127.0.0.1:58335/healthz returned 200:
	ok
	I1025 18:28:10.739240   77643 api_server.go:141] control plane version: v1.28.3
	I1025 18:28:10.739254   77643 api_server.go:131] duration metric: took 10.857888ms to wait for apiserver health ...
	I1025 18:28:10.739260   77643 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:28:10.745495   77643 system_pods.go:59] 5 kube-system pods found
	I1025 18:28:10.745515   77643 system_pods.go:61] "etcd-kubernetes-upgrade-401000" [bc70a2e2-fc87-474c-a481-3ef37c096782] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 18:28:10.745525   77643 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-401000" [599c0076-3fbe-470d-ac22-1196a23c79e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 18:28:10.745542   77643 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-401000" [f4be5561-f324-44d7-8aa8-a1d468016b34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 18:28:10.745553   77643 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-401000" [a589000e-fa14-4ce1-b3bc-9384d4131f4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 18:28:10.745562   77643 system_pods.go:61] "storage-provisioner" [13677a2c-3ada-490d-997d-b75d1b7a7528] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1025 18:28:10.745577   77643 system_pods.go:74] duration metric: took 6.309775ms to wait for pod list to return data ...
	I1025 18:28:10.745590   77643 kubeadm.go:581] duration metric: took 299.603869ms to wait for : map[apiserver:true system_pods:true] ...
	I1025 18:28:10.745605   77643 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:28:10.754570   77643 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:28:10.754585   77643 node_conditions.go:123] node cpu capacity is 12
	I1025 18:28:10.754595   77643 node_conditions.go:105] duration metric: took 8.985802ms to run NodePressure ...
	I1025 18:28:10.754603   77643 start.go:228] waiting for startup goroutines ...
	I1025 18:28:10.775086   77643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58331 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/kubernetes-upgrade-401000/id_rsa Username:docker}
	I1025 18:28:10.832036   77643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:28:10.905611   77643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:28:11.699048   77643 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 18:28:11.720068   77643 addons.go:502] enable addons completed in 1.281191434s: enabled=[storage-provisioner default-storageclass]
	I1025 18:28:11.720098   77643 start.go:233] waiting for cluster config update ...
	I1025 18:28:11.720114   77643 start.go:242] writing updated cluster config ...
	I1025 18:28:11.720989   77643 ssh_runner.go:195] Run: rm -f paused
	I1025 18:28:11.762515   77643 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 18:28:11.800136   77643 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-401000" cluster and "default" namespace by default
	I1025 18:28:07.309789   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:28:09.806705   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	I1025 18:28:11.863452   77518 pod_ready.go:102] pod "calico-kube-controllers-558d465845-m6csc" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* Oct 26 01:27:55 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:27:55Z" level=info msg="Setting cgroupDriver cgroupfs"
	Oct 26 01:27:55 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:27:55Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 26 01:27:55 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:27:55Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 26 01:27:55 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:27:55Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 26 01:27:55 kubernetes-upgrade-401000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Oct 26 01:27:58 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:27:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8a614aa67169a0c822d5b3be71bd94630681a42e1c4a8405dbea6c169c824322/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:27:58 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:27:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8738113c0c1f2c940eb09a5b72558c35b7a994377d5c1a7b47c023ac2078303/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:27:58 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:27:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8c62e14283777c9be090916a356148328d494177201dc03cda471c0cec46f1b4/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:27:58 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:27:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/36378ecf863ee5834c0214aeccf46199d2c7051a134ea745866d6d4ba3edf0cf/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:28:02 kubernetes-upgrade-401000 dockerd[13055]: time="2023-10-26T01:28:02.662007805Z" level=info msg="ignoring event" container=8c62e14283777c9be090916a356148328d494177201dc03cda471c0cec46f1b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:28:02 kubernetes-upgrade-401000 dockerd[13055]: time="2023-10-26T01:28:02.668230530Z" level=info msg="ignoring event" container=36378ecf863ee5834c0214aeccf46199d2c7051a134ea745866d6d4ba3edf0cf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:28:02 kubernetes-upgrade-401000 dockerd[13055]: time="2023-10-26T01:28:02.670383383Z" level=info msg="ignoring event" container=e8738113c0c1f2c940eb09a5b72558c35b7a994377d5c1a7b47c023ac2078303 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:28:02 kubernetes-upgrade-401000 dockerd[13055]: time="2023-10-26T01:28:02.670425463Z" level=info msg="ignoring event" container=7fe3d0f5cb399b58bf6e37004dd89c1728bfeadfb480a66705d4e0d51450bd93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:28:02 kubernetes-upgrade-401000 dockerd[13055]: time="2023-10-26T01:28:02.671399105Z" level=info msg="ignoring event" container=19b8f2a32a0c73b8403cca31b9be1421ded5c608f5605a325237965877014d2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:28:02 kubernetes-upgrade-401000 dockerd[13055]: time="2023-10-26T01:28:02.753588058Z" level=info msg="ignoring event" container=8a614aa67169a0c822d5b3be71bd94630681a42e1c4a8405dbea6c169c824322 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:28:02 kubernetes-upgrade-401000 dockerd[13055]: time="2023-10-26T01:28:02.765628020Z" level=info msg="ignoring event" container=10879cc5870cb6ab22b3ec58d131f90e2f341ac4f7d2dc69fba8790a9352298c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:28:03 kubernetes-upgrade-401000 dockerd[13055]: time="2023-10-26T01:28:03.102444268Z" level=info msg="ignoring event" container=4bdf393dd99ee46e540c49e3d726f924b7553b456c1a8ff1f27f545d5c1ea3b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 01:28:03 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:28:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e53910a52d364d3ed04e63c7197f51786788424aa7726a83f00aea9bc3cf2811/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:28:03 kubernetes-upgrade-401000 cri-dockerd[13348]: W1026 01:28:03.293547   13348 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 26 01:28:03 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:28:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6d4bb264d1e0a09c9b0fbb55c9680da6fe7478848c714068cf03d918fbf522ff/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:28:03 kubernetes-upgrade-401000 cri-dockerd[13348]: W1026 01:28:03.294271   13348 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 26 01:28:03 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:28:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/24768ee344a8857d62983e393766ad008b136a76af21d74b7697f878018796a5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:28:03 kubernetes-upgrade-401000 cri-dockerd[13348]: W1026 01:28:03.360985   13348 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 26 01:28:03 kubernetes-upgrade-401000 cri-dockerd[13348]: time="2023-10-26T01:28:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6488e6859956af23e18db025f1c37b8a272856e887fa56944cae6f2049ad45dd/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Oct 26 01:28:03 kubernetes-upgrade-401000 cri-dockerd[13348]: W1026 01:28:03.461826   13348 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bd270bf3f9bb0       6d1b4fd1b182d       8 seconds ago       Running             kube-scheduler            2                   24768ee344a88       kube-scheduler-kubernetes-upgrade-401000
	1f70d8e4ce32d       10baa1ca17068       8 seconds ago       Running             kube-controller-manager   2                   e53910a52d364       kube-controller-manager-kubernetes-upgrade-401000
	990992cb323fb       73deb9a3f7025       8 seconds ago       Running             etcd                      2                   6d4bb264d1e0a       etcd-kubernetes-upgrade-401000
	08fe0cfa12dbe       5374347291230       8 seconds ago       Running             kube-apiserver            2                   6488e6859956a       kube-apiserver-kubernetes-upgrade-401000
	19b8f2a32a0c7       6d1b4fd1b182d       15 seconds ago      Exited              kube-scheduler            1                   e8738113c0c1f       kube-scheduler-kubernetes-upgrade-401000
	10879cc5870cb       73deb9a3f7025       15 seconds ago      Exited              etcd                      1                   8c62e14283777       etcd-kubernetes-upgrade-401000
	4bdf393dd99ee       5374347291230       15 seconds ago      Exited              kube-apiserver            1                   36378ecf863ee       kube-apiserver-kubernetes-upgrade-401000
	7fe3d0f5cb399       10baa1ca17068       15 seconds ago      Exited              kube-controller-manager   1                   8a614aa67169a       kube-controller-manager-kubernetes-upgrade-401000
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-401000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-401000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=kubernetes-upgrade-401000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T18_27_36_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 01:27:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-401000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 26 Oct 2023 01:28:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 01:28:08 +0000   Thu, 26 Oct 2023 01:27:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 01:28:08 +0000   Thu, 26 Oct 2023 01:27:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 01:28:08 +0000   Thu, 26 Oct 2023 01:27:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 01:28:08 +0000   Thu, 26 Oct 2023 01:27:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-401000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6081864Ki
	  pods:               110
	System Info:
	  Machine ID:                 5cbb696afc1941758a187611bac3d3a2
	  System UUID:                5cbb696afc1941758a187611bac3d3a2
	  Boot ID:                    97028b5e-c1fe-46d5-abb1-881a12fedf72
	  Kernel Version:             6.4.16-linuxkit
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-401000                       100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kube-apiserver-kubernetes-upgrade-401000             250m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-401000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-kubernetes-upgrade-401000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (5%)   0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet  Node kubernetes-upgrade-401000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet  Node kubernetes-upgrade-401000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 43s)  kubelet  Node kubernetes-upgrade-401000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 37s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s                kubelet  Node kubernetes-upgrade-401000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s                kubelet  Node kubernetes-upgrade-401000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s                kubelet  Node kubernetes-upgrade-401000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                35s                kubelet  Node kubernetes-upgrade-401000 status is now: NodeReady
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-401000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet  Node kubernetes-upgrade-401000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet  Node kubernetes-upgrade-401000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [10879cc5870c] <==
	* {"level":"info","ts":"2023-10-26T01:27:58.9607Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-26T01:28:00.071675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-26T01:28:00.071722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-26T01:28:00.071733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-10-26T01:28:00.071741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-10-26T01:28:00.071744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-10-26T01:28:00.07175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-10-26T01:28:00.071755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-10-26T01:28:00.073363Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-401000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-26T01:28:00.073425Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:28:00.073516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:28:00.074567Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-26T01:28:00.074738Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T01:28:00.078447Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-10-26T01:28:00.078679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-26T01:28:02.572803Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-26T01:28:02.572937Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-401000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"warn","ts":"2023-10-26T01:28:02.573079Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-26T01:28:02.573295Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-26T01:28:02.66282Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-26T01:28:02.662887Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-26T01:28:02.662988Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-10-26T01:28:02.665334Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-10-26T01:28:02.665424Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-10-26T01:28:02.665434Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-401000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [990992cb323f] <==
	* {"level":"info","ts":"2023-10-26T01:28:05.755867Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-26T01:28:05.756486Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-10-26T01:28:05.756507Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-26T01:28:05.75652Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-26T01:28:05.756697Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:28:05.756767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-26T01:28:05.758941Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-26T01:28:05.759189Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-10-26T01:28:05.759207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-10-26T01:28:05.759467Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-26T01:28:05.759633Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-26T01:28:06.778968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-26T01:28:06.779044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-26T01:28:06.779072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-10-26T01:28:06.779215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-10-26T01:28:06.779233Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-10-26T01:28:06.779241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-10-26T01:28:06.779247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-10-26T01:28:06.780992Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-401000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-26T01:28:06.781018Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:28:06.781035Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-26T01:28:06.781607Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-26T01:28:06.781672Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-26T01:28:06.78269Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-10-26T01:28:06.782692Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  01:28:14 up 50 min,  0 users,  load average: 1.56, 1.62, 1.41
	Linux kubernetes-upgrade-401000 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Oct 10 20:42:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [08fe0cfa12db] <==
	* I1026 01:28:08.389677       1 establishing_controller.go:76] Starting EstablishingController
	I1026 01:28:08.389695       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1026 01:28:08.389715       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1026 01:28:08.389727       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1026 01:28:08.389746       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1026 01:28:08.389785       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1026 01:28:08.469880       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 01:28:08.553086       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 01:28:08.553476       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 01:28:08.554096       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 01:28:08.554142       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 01:28:08.554165       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 01:28:08.554496       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 01:28:08.554732       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 01:28:08.554761       1 aggregator.go:166] initial CRD sync complete...
	I1026 01:28:08.554777       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 01:28:08.554798       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 01:28:08.554855       1 cache.go:39] Caches are synced for autoregister controller
	I1026 01:28:08.554787       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 01:28:09.392193       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 01:28:10.236944       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 01:28:10.249369       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 01:28:10.322571       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1026 01:28:10.405612       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 01:28:10.415881       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [4bdf393dd99e] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 01:28:02.653614       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 01:28:02.653752       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1026 01:28:02.653982       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [1f70d8e4ce32] <==
	* I1026 01:28:11.161488       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1026 01:28:11.161560       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I1026 01:28:11.161567       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I1026 01:28:11.208923       1 controllermanager.go:642] "Started controller" controller="serviceaccount-controller"
	I1026 01:28:11.209002       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I1026 01:28:11.209009       1 shared_informer.go:311] Waiting for caches to sync for service account
	I1026 01:28:11.257834       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I1026 01:28:11.257940       1 controller.go:169] "Starting ephemeral volume controller"
	I1026 01:28:11.257946       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I1026 01:28:11.310554       1 controllermanager.go:642] "Started controller" controller="endpointslice-controller"
	I1026 01:28:11.310832       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I1026 01:28:11.310844       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I1026 01:28:11.359042       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I1026 01:28:11.359093       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1026 01:28:11.359188       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1026 01:28:11.360025       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I1026 01:28:11.360107       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1026 01:28:11.360250       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1026 01:28:11.360984       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I1026 01:28:11.361024       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1026 01:28:11.361062       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1026 01:28:11.361813       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1026 01:28:11.361875       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I1026 01:28:11.361886       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1026 01:28:11.361889       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	
	* 
	* ==> kube-controller-manager [7fe3d0f5cb39] <==
	* I1026 01:27:59.387600       1 serving.go:348] Generated self-signed cert in-memory
	I1026 01:27:59.714542       1 controllermanager.go:189] "Starting" version="v1.28.3"
	I1026 01:27:59.714730       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:27:59.717481       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1026 01:27:59.717579       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1026 01:27:59.718444       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1026 01:27:59.718532       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [19b8f2a32a0c] <==
	* I1026 01:27:59.601262       1 serving.go:348] Generated self-signed cert in-memory
	W1026 01:28:01.259248       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 01:28:01.259304       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 01:28:01.259316       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 01:28:01.259324       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 01:28:01.354822       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1026 01:28:01.354952       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:28:01.356543       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 01:28:01.356616       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 01:28:01.357043       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 01:28:01.357099       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 01:28:01.456914       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 01:28:02.566451       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1026 01:28:02.566654       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 01:28:02.568016       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1026 01:28:02.568510       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [bd270bf3f9bb] <==
	* I1026 01:28:06.261218       1 serving.go:348] Generated self-signed cert in-memory
	I1026 01:28:08.578935       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1026 01:28:08.579063       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:28:08.584562       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 01:28:08.585254       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 01:28:08.585839       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 01:28:08.584951       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 01:28:08.585891       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 01:28:08.585002       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 01:28:08.586090       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 01:28:08.586581       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 01:28:08.685895       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 01:28:08.686240       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 01:28:08.686261       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.058463   14602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b180301e1435b5f4a57030345811172-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-401000\" (UID: \"9b180301e1435b5f4a57030345811172\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.058718   14602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b180301e1435b5f4a57030345811172-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-401000\" (UID: \"9b180301e1435b5f4a57030345811172\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.058864   14602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b180301e1435b5f4a57030345811172-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-401000\" (UID: \"9b180301e1435b5f4a57030345811172\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.058984   14602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b180301e1435b5f4a57030345811172-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-401000\" (UID: \"9b180301e1435b5f4a57030345811172\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.059047   14602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b180301e1435b5f4a57030345811172-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-401000\" (UID: \"9b180301e1435b5f4a57030345811172\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.059204   14602 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/13faba637808c584f3d527a244818757-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-401000\" (UID: \"13faba637808c584f3d527a244818757\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.073525   14602 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: E1026 01:28:05.073985   14602 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.282377   14602 scope.go:117] "RemoveContainer" containerID="4bdf393dd99ee46e540c49e3d726f924b7553b456c1a8ff1f27f545d5c1ea3b5"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.284257   14602 scope.go:117] "RemoveContainer" containerID="10879cc5870cb6ab22b3ec58d131f90e2f341ac4f7d2dc69fba8790a9352298c"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.292392   14602 scope.go:117] "RemoveContainer" containerID="7fe3d0f5cb399b58bf6e37004dd89c1728bfeadfb480a66705d4e0d51450bd93"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.304625   14602 scope.go:117] "RemoveContainer" containerID="19b8f2a32a0c73b8403cca31b9be1421ded5c608f5605a325237965877014d2d"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: E1026 01:28:05.359311   14602 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-401000?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="800ms"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.487345   14602 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: E1026 01:28:05.487722   14602 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-401000"
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: W1026 01:28:05.654394   14602 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: E1026 01:28:05.654570   14602 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: W1026 01:28:05.674648   14602 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: E1026 01:28:05.674757   14602 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Oct 26 01:28:05 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:05.986623   14602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27309db933e369d751b15af97e14d058425d6ee732b2dce30efbb27a2406bbd7"
	Oct 26 01:28:06 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:06.304921   14602 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-401000"
	Oct 26 01:28:08 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:08.572885   14602 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-401000"
	Oct 26 01:28:08 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:08.573200   14602 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-401000"
	Oct 26 01:28:08 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:08.754777   14602 apiserver.go:52] "Watching apiserver"
	Oct 26 01:28:08 kubernetes-upgrade-401000 kubelet[14602]: I1026 01:28:08.855138   14602 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-401000 -n kubernetes-upgrade-401000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-401000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-401000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-401000 describe pod storage-provisioner: exit status 1 (54.549454ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-401000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-401000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-401000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-401000: (2.592928146s)
--- FAIL: TestKubernetesUpgrade (576.90s)
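A quick way to re-run the same post-mortem pod check by hand against this profile (a sketch based on the helpers_test.go commands captured above, not part of the test output; the kube-system namespace in the second command is an assumption, since minikube normally runs storage-provisioner there):

	# list pods that are not in phase Running, across all namespaces
	kubectl --context kubernetes-upgrade-401000 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# describe the flagged pod; without -n the lookup uses the context's default namespace,
	# which is consistent with the NotFound error recorded above
	kubectl --context kubernetes-upgrade-401000 -n kube-system describe pod storage-provisioner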

                                                
                                    
TestMissingContainerUpgrade (48.93s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3569089603.exe start -p missing-upgrade-928000 --memory=2200 --driver=docker 
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3569089603.exe start -p missing-upgrade-928000 --memory=2200 --driver=docker : exit status 70 (33.935399164s)

                                                
                                                
-- stdout --
	* [missing-upgrade-928000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5939MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:18:10.356008743 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "missing-upgrade-928000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5939MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:18:25.225009601 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p missing-upgrade-928000", then "minikube start -p missing-upgrade-928000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:18:25.225009601 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3569089603.exe start -p missing-upgrade-928000 --memory=2200 --driver=docker 
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3569089603.exe start -p missing-upgrade-928000 --memory=2200 --driver=docker : exit status 70 (4.099521963s)

                                                
                                                
-- stdout --
	* [missing-upgrade-928000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-928000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3569089603.exe start -p missing-upgrade-928000 --memory=2200 --driver=docker 
version_upgrade_test.go:322: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3569089603.exe start -p missing-upgrade-928000 --memory=2200 --driver=docker : exit status 70 (4.142273147s)

                                                
                                                
-- stdout --
	* [missing-upgrade-928000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-928000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:328: release start failed: exit status 70
panic.go:523: *** TestMissingContainerUpgrade FAILED at 2023-10-25 18:18:38.74606 -0700 PDT m=+2395.801149428
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-928000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-928000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80d4700a937edd2484fe27d4ab665b3771bebd641822b50d7833201bd0af893a",
	        "Created": "2023-10-26T01:18:18.484667682Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 198134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:18:18.678974903Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/80d4700a937edd2484fe27d4ab665b3771bebd641822b50d7833201bd0af893a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80d4700a937edd2484fe27d4ab665b3771bebd641822b50d7833201bd0af893a/hostname",
	        "HostsPath": "/var/lib/docker/containers/80d4700a937edd2484fe27d4ab665b3771bebd641822b50d7833201bd0af893a/hosts",
	        "LogPath": "/var/lib/docker/containers/80d4700a937edd2484fe27d4ab665b3771bebd641822b50d7833201bd0af893a/80d4700a937edd2484fe27d4ab665b3771bebd641822b50d7833201bd0af893a-json.log",
	        "Name": "/missing-upgrade-928000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-928000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/09e9d0a0d7cbf77e9fa81c58151219b61ae0f4bf49f8d7ce4c645b721ab734c6-init/diff:/var/lib/docker/overlay2/d6672e613bb02bd7dd14300293f45cb78de98f1c7128082a2421ae037c0b13ec/diff:/var/lib/docker/overlay2/e2e73ca3080d9c529ffb10f2eea67603eb3dbcc6cb2535d3aace97e3693da9eb/diff:/var/lib/docker/overlay2/6af9671d3bbbabae727d23cdccb7d7daae0c709c407987827719699890b7a6e1/diff:/var/lib/docker/overlay2/1a430d4a29ae2363c762630bd97f48ae20b6d710481ac1fa15b9f31dfa6d99dc/diff:/var/lib/docker/overlay2/d5d3741d8008f10485f4663974a0e05286905dfc543d2865b3eb3dd189c2c0cd/diff:/var/lib/docker/overlay2/ac89e51629d1b778a6631ef623aa50bed1a54a8a272129557acfb260d052eb8a/diff:/var/lib/docker/overlay2/94cd1d40cd045b909ad583db3b34774f8174f2c4ef53751a3d62f881993e5a99/diff:/var/lib/docker/overlay2/516eea8fbd9f85f0f54038149fb8cda86e5f02567a88cde900feaa6120a631c1/diff:/var/lib/docker/overlay2/214b948f1ddde9a13a6dde4c9a13be42d1509e34ee5fd01b40bf65b1011b0d04/diff:/var/lib/docker/overlay2/5a9940
759548cf8f0d426d4c517e4b130a4d13f6bb7ebf79c939d6cd431da03c/diff:/var/lib/docker/overlay2/99ef3c12061c77b4378da50b5459c471630e8cbc30261f3ee769b90f17e447ad/diff:/var/lib/docker/overlay2/3f0b8f3d987df41619addaa9e3f2c3a084dfba202fcab8ef717e78cdb343672d/diff:/var/lib/docker/overlay2/7a16469da950e1a384c3e8d34d8e5e576bca76b02dd97ff172ed4c76147da020/diff:/var/lib/docker/overlay2/60a369390ac647a09ba1e0700e212285f29c8c5d9d7d153c1ff4495e6d5d4b68/diff:/var/lib/docker/overlay2/c4b15ba87e225248094d159cf593fb0b46304b0ee354d8161d37e00fd058d880/diff:/var/lib/docker/overlay2/037edf613fce2c2111e172c7f106e5364a4fd3ef227dd6496d9ca921dec30b06/diff:/var/lib/docker/overlay2/3fa60cf93f361d3f2de355a1c9c2a039292a0979a271b8147baa807469f7640d/diff:/var/lib/docker/overlay2/24a747d83169d0b648ca52b3aa6592463599595264c6adb513fd00cc1a6b8faa/diff:/var/lib/docker/overlay2/cb0ecb3ac56d83a7bc7d261856f61807e581c04980dab3dca511afd2b91cb6ad/diff:/var/lib/docker/overlay2/e53375eb16e3e671322acb01d14c7ba5ecd0572795f0b8000bdd8e32a87a1e18/diff:/var/lib/d
ocker/overlay2/1575a1bcceee782fd6cca7631af847096b6ddd72b2a4f5ca475742e01849c96b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09e9d0a0d7cbf77e9fa81c58151219b61ae0f4bf49f8d7ce4c645b721ab734c6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09e9d0a0d7cbf77e9fa81c58151219b61ae0f4bf49f8d7ce4c645b721ab734c6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09e9d0a0d7cbf77e9fa81c58151219b61ae0f4bf49f8d7ce4c645b721ab734c6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-928000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-928000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-928000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-928000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-928000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5560c544fc97daaafe8ea306bf10a3e6f648481f7e2fd7d9cca1fd4e600103de",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58086"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58088"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5560c544fc97",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "693c99aa4e88bc45e630ed37ac1e6fd83eb77b3364596062724c9adb4d60b8af",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "17c5f4f69c914df0ca6389ad117bc7d5b2743fb9e93a6609b3f776160dc635c5",
	                    "EndpointID": "693c99aa4e88bc45e630ed37ac1e6fd83eb77b3364596062724c9adb4d60b8af",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-928000 -n missing-upgrade-928000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-928000 -n missing-upgrade-928000: exit status 6 (370.205306ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:18:39.157786   74884 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-928000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-928000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-928000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-928000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-928000: (2.23467497s)
--- FAIL: TestMissingContainerUpgrade (48.93s)
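The provisioning failure here (and the same pattern in TestStoppedBinaryUpgrade/Upgrade below) ends with systemd refusing to start docker after the unit shown in the diff is written. While the missing-upgrade-928000 container is still running, the follow-ups the log itself suggests can be run from the host with docker exec (a sketch for a live repro only; the container is deleted during the cleanup step above):

	# show why docker.service failed inside the kic container
	docker exec missing-upgrade-928000 systemctl status docker.service --no-pager
	docker exec missing-upgrade-928000 journalctl -xeu docker.service --no-pager
	# compare the unit the old provisioner wrote with the diff captured above
	docker exec missing-upgrade-928000 cat /lib/systemd/system/docker.service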

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (42.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3269964127.exe start -p stopped-upgrade-830000 --memory=2200 --vm-driver=docker 
E1025 18:20:09.564177   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3269964127.exe start -p stopped-upgrade-830000 --memory=2200 --vm-driver=docker : exit status 70 (32.878528606s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-830000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig355244955
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5939MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:20:13.234286909 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-830000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (12 available), Memory=2200MB (5939MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:20:28.232170043 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-830000", then "minikube start -p stopped-upgrade-830000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 10.23 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 24.17 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 40.00 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 59.28 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 72.42 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 89.05 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 101.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 117.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 133.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 149.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 164.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 176.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 191.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 208.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 228.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 247.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 264.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 281.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 300.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 321.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 341.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 361.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 382.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 401.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 420.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 431.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 446.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 463.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 480.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 501.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 521.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 535.87 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:20:28.232170043 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3269964127.exe start -p stopped-upgrade-830000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3269964127.exe start -p stopped-upgrade-830000 --memory=2200 --vm-driver=docker : exit status 70 (3.928907903s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-830000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3602376961
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-830000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3269964127.exe start -p stopped-upgrade-830000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3269964127.exe start -p stopped-upgrade-830000 --memory=2200 --vm-driver=docker : exit status 70 (3.966665969s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-830000] minikube v1.9.0 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3987514403
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-830000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:202: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (42.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (257.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-479000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-479000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m17.253537201s)

                                                
                                                
-- stdout --
	* [old-k8s-version-479000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-479000 in cluster old-k8s-version-479000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:32:53.979363   80988 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:32:53.979645   80988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:32:53.979651   80988 out.go:309] Setting ErrFile to fd 2...
	I1025 18:32:53.979655   80988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:32:53.979832   80988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:32:53.981302   80988 out.go:303] Setting JSON to false
	I1025 18:32:54.003332   80988 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":34341,"bootTime":1698249632,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 18:32:54.003439   80988 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:32:54.024871   80988 out.go:177] * [old-k8s-version-479000] minikube v1.31.2 on Darwin 14.0
	I1025 18:32:54.067307   80988 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 18:32:54.067394   80988 notify.go:220] Checking for updates...
	I1025 18:32:54.110285   80988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:32:54.133533   80988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:32:54.155448   80988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:32:54.176275   80988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 18:32:54.197266   80988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:32:54.218783   80988 config.go:182] Loaded profile config "kubenet-143000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:32:54.218905   80988 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:32:54.275714   80988 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 18:32:54.275852   80988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:32:54.378630   80988 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:32:54.367453155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:32:54.422094   80988 out.go:177] * Using the docker driver based on user configuration
	I1025 18:32:54.443058   80988 start.go:298] selected driver: docker
	I1025 18:32:54.443080   80988 start.go:902] validating driver "docker" against <nil>
	I1025 18:32:54.443095   80988 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:32:54.447086   80988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:32:54.549787   80988 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:32:54.537093247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:32:54.549958   80988 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 18:32:54.550188   80988 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:32:54.571457   80988 out.go:177] * Using Docker Desktop driver with root privileges
	I1025 18:32:54.592355   80988 cni.go:84] Creating CNI manager for ""
	I1025 18:32:54.592397   80988 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:32:54.592422   80988 start_flags.go:323] config:
	{Name:old-k8s-version-479000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:32:54.614340   80988 out.go:177] * Starting control plane node old-k8s-version-479000 in cluster old-k8s-version-479000
	I1025 18:32:54.636320   80988 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:32:54.657104   80988 out.go:177] * Pulling base image ...
	I1025 18:32:54.699349   80988 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:32:54.699417   80988 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 18:32:54.699423   80988 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:32:54.699438   80988 cache.go:56] Caching tarball of preloaded images
	I1025 18:32:54.699650   80988 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:32:54.699667   80988 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 18:32:54.699844   80988 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/config.json ...
	I1025 18:32:54.700458   80988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/config.json: {Name:mkad96154b9248eaa3f03a907cda0bd73596ed1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:32:54.751591   80988 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:32:54.751610   80988 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:32:54.751631   80988 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:32:54.751699   80988 start.go:365] acquiring machines lock for old-k8s-version-479000: {Name:mkc5126e3d24e31e0188d7ef4b9443b2bdba7109 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:32:54.751848   80988 start.go:369] acquired machines lock for "old-k8s-version-479000" in 135.151µs
	I1025 18:32:54.751877   80988 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-479000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:32:54.751969   80988 start.go:125] createHost starting for "" (driver="docker")
	I1025 18:32:54.773435   80988 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1025 18:32:54.773795   80988 start.go:159] libmachine.API.Create for "old-k8s-version-479000" (driver="docker")
	I1025 18:32:54.773844   80988 client.go:168] LocalClient.Create starting
	I1025 18:32:54.774017   80988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem
	I1025 18:32:54.774095   80988 main.go:141] libmachine: Decoding PEM data...
	I1025 18:32:54.774129   80988 main.go:141] libmachine: Parsing certificate...
	I1025 18:32:54.774222   80988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem
	I1025 18:32:54.774280   80988 main.go:141] libmachine: Decoding PEM data...
	I1025 18:32:54.774296   80988 main.go:141] libmachine: Parsing certificate...
	I1025 18:32:54.794472   80988 cli_runner.go:164] Run: docker network inspect old-k8s-version-479000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 18:32:54.845615   80988 cli_runner.go:211] docker network inspect old-k8s-version-479000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 18:32:54.845713   80988 network_create.go:281] running [docker network inspect old-k8s-version-479000] to gather additional debugging logs...
	I1025 18:32:54.845730   80988 cli_runner.go:164] Run: docker network inspect old-k8s-version-479000
	W1025 18:32:54.896634   80988 cli_runner.go:211] docker network inspect old-k8s-version-479000 returned with exit code 1
	I1025 18:32:54.896659   80988 network_create.go:284] error running [docker network inspect old-k8s-version-479000]: docker network inspect old-k8s-version-479000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-479000 not found
	I1025 18:32:54.896683   80988 network_create.go:286] output of [docker network inspect old-k8s-version-479000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-479000 not found
	
	** /stderr **
	I1025 18:32:54.896828   80988 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 18:32:54.949310   80988 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1025 18:32:54.949705   80988 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002424140}
	I1025 18:32:54.949721   80988 network_create.go:124] attempt to create docker network old-k8s-version-479000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1025 18:32:54.949784   80988 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-479000 old-k8s-version-479000
	W1025 18:32:54.999740   80988 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-479000 old-k8s-version-479000 returned with exit code 1
	W1025 18:32:54.999777   80988 network_create.go:149] failed to create docker network old-k8s-version-479000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-479000 old-k8s-version-479000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1025 18:32:54.999795   80988 network_create.go:116] failed to create docker network old-k8s-version-479000 192.168.58.0/24, will retry: subnet is taken
	I1025 18:32:55.001220   80988 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1025 18:32:55.001599   80988 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002343af0}
	I1025 18:32:55.001610   80988 network_create.go:124] attempt to create docker network old-k8s-version-479000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1025 18:32:55.001674   80988 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-479000 old-k8s-version-479000
	I1025 18:32:55.088993   80988 network_create.go:108] docker network old-k8s-version-479000 192.168.67.0/24 created
	I1025 18:32:55.089042   80988 kic.go:118] calculated static IP "192.168.67.2" for the "old-k8s-version-479000" container
	I1025 18:32:55.089168   80988 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 18:32:55.142003   80988 cli_runner.go:164] Run: docker volume create old-k8s-version-479000 --label name.minikube.sigs.k8s.io=old-k8s-version-479000 --label created_by.minikube.sigs.k8s.io=true
	I1025 18:32:55.197980   80988 oci.go:103] Successfully created a docker volume old-k8s-version-479000
	I1025 18:32:55.198102   80988 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-479000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-479000 --entrypoint /usr/bin/test -v old-k8s-version-479000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1025 18:32:55.644322   80988 oci.go:107] Successfully prepared a docker volume old-k8s-version-479000
	I1025 18:32:55.644360   80988 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:32:55.644372   80988 kic.go:191] Starting extracting preloaded images to volume ...
	I1025 18:32:55.644469   80988 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-479000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 18:32:58.169065   80988 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-479000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (2.524449431s)
	I1025 18:32:58.169089   80988 kic.go:200] duration metric: took 2.524639 seconds to extract preloaded images to volume
	I1025 18:32:58.169213   80988 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 18:32:58.272952   80988 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-479000 --name old-k8s-version-479000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-479000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-479000 --network old-k8s-version-479000 --ip 192.168.67.2 --volume old-k8s-version-479000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 18:32:58.561530   80988 cli_runner.go:164] Run: docker container inspect old-k8s-version-479000 --format={{.State.Running}}
	I1025 18:32:58.617218   80988 cli_runner.go:164] Run: docker container inspect old-k8s-version-479000 --format={{.State.Status}}
	I1025 18:32:58.676878   80988 cli_runner.go:164] Run: docker exec old-k8s-version-479000 stat /var/lib/dpkg/alternatives/iptables
	I1025 18:32:58.790568   80988 oci.go:144] the created container "old-k8s-version-479000" has a running status.
	I1025 18:32:58.790612   80988 kic.go:222] Creating ssh key for kic: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa...
	I1025 18:32:59.232360   80988 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 18:32:59.296039   80988 cli_runner.go:164] Run: docker container inspect old-k8s-version-479000 --format={{.State.Status}}
	I1025 18:32:59.350757   80988 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 18:32:59.350777   80988 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-479000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 18:32:59.446351   80988 cli_runner.go:164] Run: docker container inspect old-k8s-version-479000 --format={{.State.Status}}
	I1025 18:32:59.497866   80988 machine.go:88] provisioning docker machine ...
	I1025 18:32:59.497908   80988 ubuntu.go:169] provisioning hostname "old-k8s-version-479000"
	I1025 18:32:59.498012   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:32:59.549592   80988 main.go:141] libmachine: Using SSH client type: native
	I1025 18:32:59.549922   80988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59751 <nil> <nil>}
	I1025 18:32:59.549947   80988 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-479000 && echo "old-k8s-version-479000" | sudo tee /etc/hostname
	I1025 18:32:59.682274   80988 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-479000
	
	I1025 18:32:59.682392   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:32:59.734242   80988 main.go:141] libmachine: Using SSH client type: native
	I1025 18:32:59.734545   80988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59751 <nil> <nil>}
	I1025 18:32:59.734558   80988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-479000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-479000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-479000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:32:59.855458   80988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:32:59.855479   80988 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:32:59.855502   80988 ubuntu.go:177] setting up certificates
	I1025 18:32:59.855509   80988 provision.go:83] configureAuth start
	I1025 18:32:59.855584   80988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-479000
	I1025 18:32:59.907222   80988 provision.go:138] copyHostCerts
	I1025 18:32:59.907308   80988 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:32:59.907317   80988 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:32:59.907442   80988 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:32:59.907646   80988 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:32:59.907652   80988 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:32:59.907733   80988 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:32:59.907872   80988 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:32:59.907878   80988 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:32:59.908421   80988 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:32:59.908562   80988 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-479000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-479000]
	I1025 18:33:00.128955   80988 provision.go:172] copyRemoteCerts
	I1025 18:33:00.129011   80988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:33:00.129078   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:33:00.181469   80988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59751 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:33:00.271326   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:33:00.294845   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 18:33:00.317839   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 18:33:00.341542   80988 provision.go:86] duration metric: configureAuth took 486.00167ms
	I1025 18:33:00.341558   80988 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:33:00.341695   80988 config.go:182] Loaded profile config "old-k8s-version-479000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 18:33:00.341768   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:33:00.393548   80988 main.go:141] libmachine: Using SSH client type: native
	I1025 18:33:00.393847   80988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59751 <nil> <nil>}
	I1025 18:33:00.393860   80988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:33:00.517513   80988 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:33:00.517529   80988 ubuntu.go:71] root file system type: overlay
	I1025 18:33:00.517629   80988 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:33:00.517717   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:33:00.568427   80988 main.go:141] libmachine: Using SSH client type: native
	I1025 18:33:00.568769   80988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59751 <nil> <nil>}
	I1025 18:33:00.568826   80988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:33:00.701772   80988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:33:00.701888   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:33:00.753851   80988 main.go:141] libmachine: Using SSH client type: native
	I1025 18:33:00.754134   80988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59751 <nil> <nil>}
	I1025 18:33:00.754148   80988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:33:01.368442   80988 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-09-04 12:30:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-10-26 01:33:00.698872182 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
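The block above implements an idempotent unit update: the candidate unit is written to docker.service.new, diffed against the live unit, and only when they differ is it moved into place and the daemon reloaded, enabled, and restarted. A minimal local Go sketch of that write-new/diff/replace-and-reload pattern (hypothetical paths and service name; not minikube's provision code):

package main

// Illustrative only: the same write-new / diff / replace-and-reload pattern,
// run locally instead of over SSH. Paths and the service name are made up.

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func replaceIfChanged(current, candidate, service string, unit []byte) error {
	if err := os.WriteFile(candidate, unit, 0o644); err != nil {
		return err
	}
	old, err := os.ReadFile(current)
	if err == nil && bytes.Equal(old, unit) {
		return os.Remove(candidate) // unit unchanged, nothing to restart
	}
	if err := os.Rename(candidate, current); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n")
	if err := replaceIfChanged("/tmp/example.service", "/tmp/example.service.new", "example", unit); err != nil {
		log.Fatal(err)
	}
}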
	I1025 18:33:01.368473   80988 machine.go:91] provisioned docker machine in 1.87052983s
	I1025 18:33:01.368480   80988 client.go:171] LocalClient.Create took 6.594431418s
	I1025 18:33:01.368495   80988 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-479000" took 6.594505758s
	I1025 18:33:01.368502   80988 start.go:300] post-start starting for "old-k8s-version-479000" (driver="docker")
	I1025 18:33:01.368512   80988 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:33:01.368580   80988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:33:01.368643   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:33:01.421509   80988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59751 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:33:01.511333   80988 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:33:01.515805   80988 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:33:01.515829   80988 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:33:01.515837   80988 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:33:01.515847   80988 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:33:01.515859   80988 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:33:01.515951   80988 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:33:01.516122   80988 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:33:01.516307   80988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:33:01.525398   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:33:01.548314   80988 start.go:303] post-start completed in 179.798928ms
	I1025 18:33:01.548849   80988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-479000
	I1025 18:33:01.600673   80988 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/config.json ...
	I1025 18:33:01.601094   80988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:33:01.601155   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:33:01.653328   80988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59751 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:33:01.741822   80988 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:33:01.747350   80988 start.go:128] duration metric: createHost completed in 6.995155783s
	I1025 18:33:01.747370   80988 start.go:83] releasing machines lock for "old-k8s-version-479000", held for 6.995302968s
	I1025 18:33:01.747448   80988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-479000
	I1025 18:33:01.798891   80988 ssh_runner.go:195] Run: cat /version.json
	I1025 18:33:01.798934   80988 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:33:01.798957   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:33:01.799001   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:33:01.859505   80988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59751 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:33:01.859604   80988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59751 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:33:01.945387   80988 ssh_runner.go:195] Run: systemctl --version
	I1025 18:33:02.051731   80988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 18:33:02.057734   80988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 18:33:02.083223   80988 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 18:33:02.083319   80988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 18:33:02.100820   80988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 18:33:02.117915   80988 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
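The find/sed pipeline above rewrites any bridge CNI config so its pod subnet becomes the cluster CIDR 10.244.0.0/16. A rough Go equivalent for a single conflist document, assuming a simplified schema (illustrative, not the code minikube runs):

package main

import (
	"encoding/json"
	"fmt"
)

// patchBridgeSubnet sets the ipam subnet of every bridge plugin in a
// CNI conflist document to the given CIDR.
func patchBridgeSubnet(conflist []byte, cidr string) ([]byte, error) {
	var doc map[string]interface{}
	if err := json.Unmarshal(conflist, &doc); err != nil {
		return nil, err
	}
	plugins, _ := doc["plugins"].([]interface{})
	for _, p := range plugins {
		plugin, _ := p.(map[string]interface{})
		if plugin["type"] != "bridge" {
			continue
		}
		ipam, _ := plugin["ipam"].(map[string]interface{})
		if ipam == nil {
			ipam = map[string]interface{}{}
			plugin["ipam"] = ipam
		}
		ipam["subnet"] = cidr
	}
	return json.MarshalIndent(doc, "", "  ")
}

func main() {
	in := []byte(`{"cniVersion":"0.4.0","plugins":[{"type":"bridge","ipam":{"subnet":"10.88.0.0/16"}}]}`)
	out, err := patchBridgeSubnet(in, "10.244.0.0/16")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}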
	I1025 18:33:02.117937   80988 start.go:472] detecting cgroup driver to use...
	I1025 18:33:02.117955   80988 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:33:02.118072   80988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:33:02.134671   80988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1025 18:33:02.145411   80988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:33:02.156055   80988 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:33:02.156133   80988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:33:02.167719   80988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:33:02.178431   80988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:33:02.189889   80988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:33:02.200871   80988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:33:02.211246   80988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:33:02.222154   80988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:33:02.231580   80988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:33:02.240824   80988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:33:02.304777   80988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:33:02.395616   80988 start.go:472] detecting cgroup driver to use...
	I1025 18:33:02.395635   80988 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:33:02.395698   80988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:33:02.414282   80988 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:33:02.414377   80988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:33:02.426896   80988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:33:02.446457   80988 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:33:02.451682   80988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:33:02.462541   80988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:33:02.483856   80988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:33:02.573208   80988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:33:02.670329   80988 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:33:02.670434   80988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:33:02.688992   80988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:33:02.771595   80988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:33:03.026436   80988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:33:03.051902   80988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:33:03.123333   80988 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	I1025 18:33:03.123420   80988 cli_runner.go:164] Run: docker exec -t old-k8s-version-479000 dig +short host.docker.internal
	I1025 18:33:03.246721   80988 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:33:03.246805   80988 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:33:03.251926   80988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
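The one-liner above keeps the host.minikube.internal mapping idempotent: any existing line for that name is filtered out, the current IP is appended, and the result is copied back over /etc/hosts. A small in-memory Go sketch of the same rewrite (hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"strings"
)

// pinHost drops any existing line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo / cp shell pattern.
func pinHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		keep = append(keep, line)
	}
	return strings.Join(keep, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	fmt.Print(pinHost(hosts, "192.168.65.254", "host.minikube.internal"))
}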
	I1025 18:33:03.264351   80988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:33:03.316038   80988 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:33:03.316107   80988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:33:03.338721   80988 docker.go:693] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1025 18:33:03.338736   80988 docker.go:699] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
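The preload tarball ships images tagged under k8s.gcr.io, while the lookup above asks for the registry.k8s.io name, so the exact-name check reports that kube-apiserver "wasn't preloaded" and the cached images are loaded individually instead. A tiny sketch of that kind of verbatim membership check (illustrative only):

package main

import "fmt"

// preloaded reports whether want appears verbatim in the image list;
// a k8s.gcr.io tag does not satisfy a registry.k8s.io lookup.
func preloaded(images []string, want string) bool {
	for _, img := range images {
		if img == want {
			return true
		}
	}
	return false
}

func main() {
	images := []string{
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/pause:3.1",
	}
	fmt.Println(preloaded(images, "registry.k8s.io/kube-apiserver:v1.16.0")) // false
}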
	I1025 18:33:03.338796   80988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:33:03.348908   80988 ssh_runner.go:195] Run: which lz4
	I1025 18:33:03.354371   80988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 18:33:03.359519   80988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 18:33:03.359560   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I1025 18:33:09.124365   80988 docker.go:657] Took 5.769873 seconds to copy over tarball
	I1025 18:33:09.124435   80988 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 18:33:11.575908   80988 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.451383193s)
	I1025 18:33:11.575924   80988 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 18:33:11.627864   80988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:33:11.638675   80988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I1025 18:33:11.661138   80988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:33:11.727833   80988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:33:12.361002   80988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:33:12.382479   80988 docker.go:693] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1025 18:33:12.382494   80988 docker.go:699] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1025 18:33:12.382505   80988 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 18:33:12.390682   80988 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:33:12.390769   80988 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1025 18:33:12.390800   80988 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1025 18:33:12.390834   80988 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:33:12.391321   80988 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:33:12.391506   80988 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:33:12.392435   80988 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:33:12.392499   80988 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1025 18:33:12.400279   80988 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:33:12.400706   80988 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:33:12.401964   80988 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:33:12.402131   80988 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1025 18:33:12.402048   80988 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:33:12.402415   80988 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:33:12.402386   80988 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1025 18:33:12.402596   80988 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1025 18:33:13.088923   80988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:33:13.111990   80988 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1025 18:33:13.112034   80988 docker.go:318] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:33:13.112085   80988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:33:13.134360   80988 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1025 18:33:13.213782   80988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:33:13.236364   80988 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1025 18:33:13.236399   80988 docker.go:318] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:33:13.236468   80988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:33:13.258674   80988 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1025 18:33:13.877939   80988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1025 18:33:13.901265   80988 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1025 18:33:13.901294   80988 docker.go:318] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1025 18:33:13.901356   80988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I1025 18:33:13.924368   80988 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1025 18:33:13.976430   80988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:33:14.190379   80988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:33:14.213036   80988 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1025 18:33:14.213078   80988 docker.go:318] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:33:14.213148   80988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:33:14.235261   80988 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1025 18:33:14.489632   80988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1025 18:33:14.512441   80988 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1025 18:33:14.512481   80988 docker.go:318] Removing image: registry.k8s.io/coredns:1.6.2
	I1025 18:33:14.512546   80988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I1025 18:33:14.534925   80988 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1025 18:33:14.803497   80988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:33:14.825704   80988 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1025 18:33:14.825732   80988 docker.go:318] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:33:14.825778   80988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:33:14.847147   80988 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1025 18:33:15.122463   80988 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1025 18:33:15.145360   80988 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1025 18:33:15.145385   80988 docker.go:318] Removing image: registry.k8s.io/pause:3.1
	I1025 18:33:15.145452   80988 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I1025 18:33:15.166320   80988 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1025 18:33:15.166366   80988 cache_images.go:92] LoadImages completed in 2.783766426s
	W1025 18:33:15.166418   80988 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	I1025 18:33:15.166488   80988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:33:15.220589   80988 cni.go:84] Creating CNI manager for ""
	I1025 18:33:15.220607   80988 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:33:15.220631   80988 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 18:33:15.220661   80988 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-479000 NodeName:old-k8s-version-479000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 18:33:15.220780   80988 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-479000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-479000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
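The kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sanity-check sketch over such a config, assuming gopkg.in/yaml.v3 is available; the fields inspected are the ones visible above:

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

// checkKubeadmConfig decodes each YAML document and prints its kind plus
// the kubernetesVersion and podSubnet fields when present.
func checkKubeadmConfig(config string) error {
	for _, doc := range strings.Split(config, "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			return err
		}
		kind, _ := m["kind"].(string)
		if kind == "ClusterConfiguration" {
			networking, _ := m["networking"].(map[string]interface{})
			fmt.Println(kind, m["kubernetesVersion"], networking["podSubnet"])
			continue
		}
		fmt.Println(kind)
	}
	return nil
}

func main() {
	cfg := `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs`
	if err := checkKubeadmConfig(cfg); err != nil {
		panic(err)
	}
}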
	I1025 18:33:15.220858   80988 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-479000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:33:15.220919   80988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1025 18:33:15.230738   80988 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:33:15.230811   80988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:33:15.240538   80988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1025 18:33:15.258182   80988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:33:15.275835   80988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I1025 18:33:15.293527   80988 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:33:15.298184   80988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:33:15.310041   80988 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000 for IP: 192.168.67.2
	I1025 18:33:15.310062   80988 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:33:15.310229   80988 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:33:15.310289   80988 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:33:15.310330   80988 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/client.key
	I1025 18:33:15.310344   80988 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/client.crt with IP's: []
	I1025 18:33:15.711923   80988 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/client.crt ...
	I1025 18:33:15.711941   80988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/client.crt: {Name:mk75e02a0990e49d98d16d9761310f8c3b274942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:33:15.712251   80988 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/client.key ...
	I1025 18:33:15.712259   80988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/client.key: {Name:mk5e03f62ffc15bf5973af58efc90cd11c9751ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:33:15.712493   80988 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.key.c7fa3a9e
	I1025 18:33:15.712510   80988 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 18:33:15.804350   80988 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.crt.c7fa3a9e ...
	I1025 18:33:15.804362   80988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.crt.c7fa3a9e: {Name:mke547eb8324beb02766740c6c5c7b0719ec7acf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:33:15.804624   80988 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.key.c7fa3a9e ...
	I1025 18:33:15.804632   80988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.key.c7fa3a9e: {Name:mk2badd7d7ba697f88806230120ac20ba374f3b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:33:15.804844   80988 certs.go:337] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.crt
	I1025 18:33:15.805027   80988 certs.go:341] copying /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.key
	I1025 18:33:15.805191   80988 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.key
	I1025 18:33:15.805205   80988 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.crt with IP's: []
	I1025 18:33:15.920394   80988 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.crt ...
	I1025 18:33:15.920405   80988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.crt: {Name:mk675b7160007201bc9e59a8a61dc1772333de4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:33:15.920678   80988 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.key ...
	I1025 18:33:15.920686   80988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.key: {Name:mk47bc066efb1e7a50468dedabd2cd9dc4be8b8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
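The crypto.go lines above produce the profile's client, apiserver, and front-proxy certificates; the apiserver cert is signed for the IP SANs listed at 18:33:15 (192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1). A self-contained standard-library sketch of generating a certificate with IP SANs like these; it is not the code path minikube uses:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key and self-signed certificate carrying the IP SANs seen in the log.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.67.2"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemCert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Printf("generated %d-byte PEM certificate with %d IP SANs\n", len(pemCert), len(tmpl.IPAddresses))
}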
	I1025 18:33:15.921073   80988 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:33:15.921127   80988 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:33:15.921139   80988 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:33:15.921173   80988 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:33:15.921205   80988 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:33:15.921237   80988 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:33:15.921305   80988 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:33:15.921856   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 18:33:15.946736   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 18:33:15.970636   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:33:15.993917   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 18:33:16.017541   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:33:16.040615   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:33:16.064346   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:33:16.087506   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:33:16.111085   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:33:16.134348   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:33:16.157807   80988 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:33:16.181744   80988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:33:16.199562   80988 ssh_runner.go:195] Run: openssl version
	I1025 18:33:16.206044   80988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:33:16.216668   80988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:33:16.221347   80988 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:33:16.221394   80988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:33:16.228602   80988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:33:16.239031   80988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:33:16.249807   80988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:33:16.254266   80988 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:33:16.254315   80988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:33:16.261205   80988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:33:16.271591   80988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:33:16.281772   80988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:33:16.286433   80988 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:33:16.286498   80988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:33:16.294007   80988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:33:16.304154   80988 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:33:16.308677   80988 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1025 18:33:16.308724   80988 kubeadm.go:404] StartCluster: {Name:old-k8s-version-479000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:33:16.308832   80988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:33:16.329066   80988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:33:16.338945   80988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:33:16.348479   80988 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:33:16.348535   80988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:33:16.358198   80988 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:33:16.358225   80988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:33:16.410549   80988 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1025 18:33:16.410592   80988 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:33:16.672387   80988 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:33:16.672484   80988 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:33:16.672568   80988 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:33:16.864163   80988 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:33:16.864844   80988 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:33:16.871966   80988 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1025 18:33:16.945692   80988 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:33:16.970052   80988 out.go:204]   - Generating certificates and keys ...
	I1025 18:33:16.970137   80988 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:33:16.970241   80988 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:33:17.045896   80988 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 18:33:17.218031   80988 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1025 18:33:17.341384   80988 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1025 18:33:17.399187   80988 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1025 18:33:17.577307   80988 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1025 18:33:17.577419   80988 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-479000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1025 18:33:17.760120   80988 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1025 18:33:17.760241   80988 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-479000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1025 18:33:18.036580   80988 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 18:33:18.248937   80988 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 18:33:18.447449   80988 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1025 18:33:18.447656   80988 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:33:18.672821   80988 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:33:18.888402   80988 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:33:18.996950   80988 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:33:19.156349   80988 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:33:19.156878   80988 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:33:19.178604   80988 out.go:204]   - Booting up control plane ...
	I1025 18:33:19.178678   80988 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:33:19.178755   80988 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:33:19.178821   80988 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:33:19.178894   80988 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:33:19.179045   80988 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:33:59.166397   80988 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 18:33:59.167181   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:33:59.167474   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:34:04.167979   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:34:04.168169   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:34:14.169429   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:34:14.169619   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:34:34.172549   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:34:34.172852   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:35:14.174080   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:35:14.174320   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:35:14.174333   80988 kubeadm.go:322] 
	I1025 18:35:14.174375   80988 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 18:35:14.174417   80988 kubeadm.go:322] 	timed out waiting for the condition
	I1025 18:35:14.174424   80988 kubeadm.go:322] 
	I1025 18:35:14.174459   80988 kubeadm.go:322] This error is likely caused by:
	I1025 18:35:14.174493   80988 kubeadm.go:322] 	- The kubelet is not running
	I1025 18:35:14.174597   80988 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 18:35:14.174606   80988 kubeadm.go:322] 
	I1025 18:35:14.174703   80988 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 18:35:14.174735   80988 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 18:35:14.174769   80988 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 18:35:14.174775   80988 kubeadm.go:322] 
	I1025 18:35:14.174902   80988 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 18:35:14.175034   80988 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 18:35:14.175150   80988 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 18:35:14.175234   80988 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 18:35:14.175352   80988 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 18:35:14.175407   80988 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 18:35:14.177351   80988 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 18:35:14.177429   80988 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 18:35:14.177528   80988 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1025 18:35:14.177618   80988 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:35:14.177689   80988 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 18:35:14.177748   80988 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W1025 18:35:14.177832   80988 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-479000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-479000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-479000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-479000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 18:35:14.177863   80988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 18:35:14.595022   80988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:35:14.607292   80988 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:35:14.607349   80988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:35:14.617161   80988 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:35:14.617196   80988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:35:14.670227   80988 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1025 18:35:14.670298   80988 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:35:14.922296   80988 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:35:14.922375   80988 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:35:14.922456   80988 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:35:15.106762   80988 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:35:15.107638   80988 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:35:15.114475   80988 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1025 18:35:15.189936   80988 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:35:15.211612   80988 out.go:204]   - Generating certificates and keys ...
	I1025 18:35:15.211695   80988 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:35:15.211766   80988 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:35:15.211827   80988 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:35:15.211871   80988 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:35:15.211925   80988 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:35:15.212013   80988 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 18:35:15.212072   80988 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:35:15.212131   80988 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:35:15.212231   80988 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:35:15.212322   80988 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:35:15.212359   80988 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 18:35:15.212425   80988 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:35:15.245361   80988 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:35:15.409936   80988 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:35:15.510588   80988 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:35:15.598865   80988 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:35:15.599660   80988 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:35:15.621350   80988 out.go:204]   - Booting up control plane ...
	I1025 18:35:15.621436   80988 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:35:15.621516   80988 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:35:15.621579   80988 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:35:15.621644   80988 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:35:15.621774   80988 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:35:55.609710   80988 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 18:35:55.610362   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:35:55.610552   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:36:00.612358   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:36:00.612567   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:36:10.614051   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:36:10.614266   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:36:30.616749   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:36:30.616976   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:37:10.619615   80988 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:37:10.619996   80988 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:37:10.620013   80988 kubeadm.go:322] 
	I1025 18:37:10.620051   80988 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 18:37:10.620095   80988 kubeadm.go:322] 	timed out waiting for the condition
	I1025 18:37:10.620109   80988 kubeadm.go:322] 
	I1025 18:37:10.620174   80988 kubeadm.go:322] This error is likely caused by:
	I1025 18:37:10.620201   80988 kubeadm.go:322] 	- The kubelet is not running
	I1025 18:37:10.620285   80988 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 18:37:10.620296   80988 kubeadm.go:322] 
	I1025 18:37:10.620403   80988 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 18:37:10.620434   80988 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 18:37:10.620465   80988 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 18:37:10.620474   80988 kubeadm.go:322] 
	I1025 18:37:10.620595   80988 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 18:37:10.620723   80988 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 18:37:10.620826   80988 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 18:37:10.620876   80988 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 18:37:10.620932   80988 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 18:37:10.620962   80988 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 18:37:10.622942   80988 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 18:37:10.623022   80988 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 18:37:10.623132   80988 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1025 18:37:10.623218   80988 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:37:10.623300   80988 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 18:37:10.623366   80988 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1025 18:37:10.623392   80988 kubeadm.go:406] StartCluster complete in 3m54.307641011s
	I1025 18:37:10.623485   80988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:37:10.644434   80988 logs.go:284] 0 containers: []
	W1025 18:37:10.644448   80988 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:37:10.644520   80988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:37:10.663774   80988 logs.go:284] 0 containers: []
	W1025 18:37:10.663788   80988 logs.go:286] No container was found matching "etcd"
	I1025 18:37:10.663857   80988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:37:10.684779   80988 logs.go:284] 0 containers: []
	W1025 18:37:10.684792   80988 logs.go:286] No container was found matching "coredns"
	I1025 18:37:10.684863   80988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:37:10.704572   80988 logs.go:284] 0 containers: []
	W1025 18:37:10.704585   80988 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:37:10.704673   80988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:37:10.724210   80988 logs.go:284] 0 containers: []
	W1025 18:37:10.724224   80988 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:37:10.724291   80988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:37:10.744729   80988 logs.go:284] 0 containers: []
	W1025 18:37:10.744742   80988 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:37:10.744806   80988 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:37:10.765613   80988 logs.go:284] 0 containers: []
	W1025 18:37:10.765630   80988 logs.go:286] No container was found matching "kindnet"
	I1025 18:37:10.765645   80988 logs.go:123] Gathering logs for kubelet ...
	I1025 18:37:10.765660   80988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:37:10.804341   80988 logs.go:123] Gathering logs for dmesg ...
	I1025 18:37:10.804355   80988 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:37:10.819151   80988 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:37:10.819172   80988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:37:10.877192   80988 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:37:10.877205   80988 logs.go:123] Gathering logs for Docker ...
	I1025 18:37:10.877212   80988 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:37:10.894899   80988 logs.go:123] Gathering logs for container status ...
	I1025 18:37:10.894915   80988 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 18:37:10.957012   80988 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 18:37:10.957033   80988 out.go:239] * 
	* 
	W1025 18:37:10.957091   80988 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 18:37:10.957107   80988 out.go:239] * 
	* 
	W1025 18:37:10.957785   80988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:37:11.019607   80988 out.go:177] 
	W1025 18:37:11.061852   80988 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 18:37:11.061922   80988 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 18:37:11.061961   80988 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 18:37:11.103678   80988 out.go:177] 

                                                
                                                
** /stderr **
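Note (editorial): the suggestion printed in the log above points at a kubelet/Docker cgroup-driver mismatch. A minimal retry along the lines the log itself proposes might look like the following (profile name and flags taken from this run; an illustrative sketch, not a verified fix):

	out/minikube-darwin-amd64 ssh -p old-k8s-version-479000 "sudo journalctl -xeu kubelet"
	out/minikube-darwin-amd64 start -p old-k8s-version-479000 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd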
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-479000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-479000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-479000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69",
	        "Created": "2023-10-26T01:32:58.324650138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308034,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:32:58.551798864Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69-json.log",
	        "Name": "/old-k8s-version-479000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-479000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-479000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-479000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-479000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-479000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61ef09e73509324069dc4e60373e50da7bb9f4aa73cc234fc24dd5fc9d713013",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59751"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59747"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59748"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59749"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59750"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/61ef09e73509",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-479000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5e3f3c28e57c",
	                        "old-k8s-version-479000"
	                    ],
	                    "NetworkID": "e1c286b1eee5e63f7c876927f11c7e5f513aa124ea1227ec48978fbb98cbe026",
	                    "EndpointID": "e6325b666370a2ad46be3ad7725f8a9875c0e80e153924797402687b3b6529c1",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 6 (405.594314ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:37:11.670048   82083 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-479000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-479000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (257.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-479000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-479000 create -f testdata/busybox.yaml: exit status 1 (40.554255ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
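Note (editorial): the "no openapi getter" error is consistent with the profile's endpoint missing from the kubeconfig, as the status calls below also report. A quick way to confirm and refresh the context could be (commands assumed for illustration, not part of the test run):

	kubectl config get-contexts
	out/minikube-darwin-amd64 update-context -p old-k8s-version-479000
	kubectl --context old-k8s-version-479000 get nodes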
start_stop_delete_test.go:196: kubectl --context old-k8s-version-479000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-479000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-479000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69",
	        "Created": "2023-10-26T01:32:58.324650138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308034,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:32:58.551798864Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69-json.log",
	        "Name": "/old-k8s-version-479000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-479000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-479000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-479000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-479000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-479000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61ef09e73509324069dc4e60373e50da7bb9f4aa73cc234fc24dd5fc9d713013",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59751"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59747"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59748"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59749"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59750"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/61ef09e73509",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-479000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5e3f3c28e57c",
	                        "old-k8s-version-479000"
	                    ],
	                    "NetworkID": "e1c286b1eee5e63f7c876927f11c7e5f513aa124ea1227ec48978fbb98cbe026",
	                    "EndpointID": "e6325b666370a2ad46be3ad7725f8a9875c0e80e153924797402687b3b6529c1",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 6 (385.306071ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:37:12.151580   82098 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-479000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-479000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-479000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-479000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69",
	        "Created": "2023-10-26T01:32:58.324650138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308034,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:32:58.551798864Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69-json.log",
	        "Name": "/old-k8s-version-479000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-479000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-479000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-479000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-479000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-479000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61ef09e73509324069dc4e60373e50da7bb9f4aa73cc234fc24dd5fc9d713013",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59751"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59747"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59748"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59749"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59750"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/61ef09e73509",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-479000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5e3f3c28e57c",
	                        "old-k8s-version-479000"
	                    ],
	                    "NetworkID": "e1c286b1eee5e63f7c876927f11c7e5f513aa124ea1227ec48978fbb98cbe026",
	                    "EndpointID": "e6325b666370a2ad46be3ad7725f8a9875c0e80e153924797402687b3b6529c1",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 6 (408.121663ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:37:12.614058   82110 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-479000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-479000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-479000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1025 18:37:20.956216   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:20.961986   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:20.972155   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:20.992766   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:21.032903   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:21.113407   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:21.273757   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:21.594534   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:22.235283   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:23.515574   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:24.532457   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:37:26.076798   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:31.197117   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:37:31.288403   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:37:35.291844   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 18:37:41.437626   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:38:01.919174   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:38:06.911193   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:06.916944   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:06.927369   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:06.947497   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:06.988454   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:07.068745   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:07.229219   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:07.549597   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:08.190094   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:09.470480   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:12.022622   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:38:12.030813   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:17.151781   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:23.884709   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
E1025 18:38:27.392289   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:42.880965   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:38:46.455578   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:38:47.873715   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:38:51.566848   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-479000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m55.604597178s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
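Note (editorial): the addon callbacks fail because the apiserver on 127.0.0.1:8443 is refusing connections, which matches the earlier control-plane startup failure. When reproducing locally, gathering the log bundle the message above asks for would look roughly like this (illustrative invocation):

	out/minikube-darwin-amd64 logs --file=logs.txt -p old-k8s-version-479000
	out/minikube-darwin-amd64 status -p old-k8s-version-479000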
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-479000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-479000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-479000 describe deploy/metrics-server -n kube-system: exit status 1 (35.630593ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-479000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-479000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-479000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-479000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69",
	        "Created": "2023-10-26T01:32:58.324650138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308034,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:32:58.551798864Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69-json.log",
	        "Name": "/old-k8s-version-479000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-479000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-479000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-479000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-479000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-479000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61ef09e73509324069dc4e60373e50da7bb9f4aa73cc234fc24dd5fc9d713013",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59751"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59747"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59748"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59749"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59750"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/61ef09e73509",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-479000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5e3f3c28e57c",
	                        "old-k8s-version-479000"
	                    ],
	                    "NetworkID": "e1c286b1eee5e63f7c876927f11c7e5f513aa124ea1227ec48978fbb98cbe026",
	                    "EndpointID": "e6325b666370a2ad46be3ad7725f8a9875c0e80e153924797402687b3b6529c1",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
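The inspect dump above shows the cluster's 8443/tcp apiserver port published on a random localhost port (59750 in this run); later in this log minikube reads the same data with Go templates via `docker container inspect -f`. As an assumption-laden sketch (illustrative only, not minikube's implementation), the equivalent lookup can be done by decoding the inspect JSON directly:

// inspect_port.go - illustrative only; reads `docker inspect <container>`
// output from stdin and prints the host binding for 8443/tcp.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var containers []inspect
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
	}
}

Example usage (hypothetical): docker inspect old-k8s-version-479000 | go run inspect_port.go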
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 6 (384.926879ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:39:08.697029   82151 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-479000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
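The status error above (exit status 6) is raised because the profile's context is missing from the kubeconfig, which is also why the earlier kubectl calls reported "context ... does not exist"; the stdout warning suggests `minikube update-context` as the fix. A minimal client-go sketch of the same check (assuming k8s.io/client-go is available; the kubeconfig path and context name are the ones from this run):

// kubeconfig_context.go - illustrative sketch using client-go's clientcmd loader.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/17488-64832/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts["old-k8s-version-479000"]; !ok {
		// This is the condition behind "does not appear in ... kubeconfig" above.
		fmt.Println("context not found; `minikube update-context` would rewrite it")
		return
	}
	fmt.Println("context exists in kubeconfig")
}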
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-479000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (510.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-479000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E1025 18:39:12.880631   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:39:28.633314   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:39:28.836137   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:39:40.566116   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:39:47.447944   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:40:00.521167   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
E1025 18:40:04.803968   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:40:15.133745   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-479000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m27.058384376s)

                                                
                                                
-- stdout --
	* [old-k8s-version-479000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-479000 in cluster old-k8s-version-479000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-479000" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:39:10.767343   82181 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:39:10.767525   82181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:10.767530   82181 out.go:309] Setting ErrFile to fd 2...
	I1025 18:39:10.767534   82181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:10.767707   82181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:39:10.769074   82181 out.go:303] Setting JSON to false
	I1025 18:39:10.791112   82181 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":34718,"bootTime":1698249632,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 18:39:10.791210   82181 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:39:10.812642   82181 out.go:177] * [old-k8s-version-479000] minikube v1.31.2 on Darwin 14.0
	I1025 18:39:10.856373   82181 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 18:39:10.856439   82181 notify.go:220] Checking for updates...
	I1025 18:39:10.898977   82181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:39:10.920180   82181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:39:10.940999   82181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:39:10.962221   82181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 18:39:10.983332   82181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:39:11.006945   82181 config.go:182] Loaded profile config "old-k8s-version-479000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 18:39:11.029367   82181 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1025 18:39:11.051310   82181 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:39:11.110117   82181 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 18:39:11.110250   82181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:39:11.210928   82181 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:39:11.199958429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:39:11.253417   82181 out.go:177] * Using the docker driver based on existing profile
	I1025 18:39:11.274494   82181 start.go:298] selected driver: docker
	I1025 18:39:11.274511   82181 start.go:902] validating driver "docker" against &{Name:old-k8s-version-479000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:39:11.274593   82181 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:39:11.278522   82181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:39:11.382986   82181 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:39:11.372058917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:39:11.383207   82181 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:39:11.383237   82181 cni.go:84] Creating CNI manager for ""
	I1025 18:39:11.383249   82181 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:39:11.383261   82181 start_flags.go:323] config:
	{Name:old-k8s-version-479000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:39:11.426560   82181 out.go:177] * Starting control plane node old-k8s-version-479000 in cluster old-k8s-version-479000
	I1025 18:39:11.463721   82181 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:39:11.485607   82181 out.go:177] * Pulling base image ...
	I1025 18:39:11.506585   82181 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:39:11.506651   82181 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:39:11.506682   82181 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 18:39:11.506710   82181 cache.go:56] Caching tarball of preloaded images
	I1025 18:39:11.506921   82181 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:39:11.506940   82181 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 18:39:11.507797   82181 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/config.json ...
	I1025 18:39:11.561012   82181 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:39:11.561037   82181 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:39:11.561065   82181 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:39:11.561110   82181 start.go:365] acquiring machines lock for old-k8s-version-479000: {Name:mkc5126e3d24e31e0188d7ef4b9443b2bdba7109 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:39:11.561202   82181 start.go:369] acquired machines lock for "old-k8s-version-479000" in 71.111µs
	I1025 18:39:11.561230   82181 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:39:11.561239   82181 fix.go:54] fixHost starting: 
	I1025 18:39:11.561487   82181 cli_runner.go:164] Run: docker container inspect old-k8s-version-479000 --format={{.State.Status}}
	I1025 18:39:11.612872   82181 fix.go:102] recreateIfNeeded on old-k8s-version-479000: state=Stopped err=<nil>
	W1025 18:39:11.612923   82181 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 18:39:11.634859   82181 out.go:177] * Restarting existing docker container for "old-k8s-version-479000" ...
	I1025 18:39:11.677421   82181 cli_runner.go:164] Run: docker start old-k8s-version-479000
	I1025 18:39:11.955575   82181 cli_runner.go:164] Run: docker container inspect old-k8s-version-479000 --format={{.State.Status}}
	I1025 18:39:12.013826   82181 kic.go:427] container "old-k8s-version-479000" state is running.
	I1025 18:39:12.014568   82181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-479000
	I1025 18:39:12.074542   82181 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/config.json ...
	I1025 18:39:12.074971   82181 machine.go:88] provisioning docker machine ...
	I1025 18:39:12.074999   82181 ubuntu.go:169] provisioning hostname "old-k8s-version-479000"
	I1025 18:39:12.075089   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:12.142302   82181 main.go:141] libmachine: Using SSH client type: native
	I1025 18:39:12.142772   82181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59994 <nil> <nil>}
	I1025 18:39:12.142786   82181 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-479000 && echo "old-k8s-version-479000" | sudo tee /etc/hostname
	I1025 18:39:12.144715   82181 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 18:39:15.278513   82181 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-479000
	
	I1025 18:39:15.278599   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:15.362606   82181 main.go:141] libmachine: Using SSH client type: native
	I1025 18:39:15.362886   82181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59994 <nil> <nil>}
	I1025 18:39:15.362899   82181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-479000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-479000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-479000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:39:15.484254   82181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:39:15.484289   82181 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:39:15.484308   82181 ubuntu.go:177] setting up certificates
	I1025 18:39:15.484317   82181 provision.go:83] configureAuth start
	I1025 18:39:15.484403   82181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-479000
	I1025 18:39:15.536820   82181 provision.go:138] copyHostCerts
	I1025 18:39:15.536912   82181 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:39:15.536928   82181 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:39:15.537050   82181 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:39:15.537330   82181 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:39:15.537338   82181 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:39:15.537421   82181 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:39:15.537593   82181 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:39:15.537599   82181 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:39:15.537663   82181 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:39:15.537811   82181 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-479000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-479000]
	I1025 18:39:15.601494   82181 provision.go:172] copyRemoteCerts
	I1025 18:39:15.601550   82181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:39:15.601613   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:15.652941   82181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59994 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:39:15.743965   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:39:15.766918   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 18:39:15.791237   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 18:39:15.814447   82181 provision.go:86] duration metric: configureAuth took 330.105717ms
	I1025 18:39:15.814461   82181 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:39:15.814609   82181 config.go:182] Loaded profile config "old-k8s-version-479000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1025 18:39:15.814680   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:15.866827   82181 main.go:141] libmachine: Using SSH client type: native
	I1025 18:39:15.867110   82181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59994 <nil> <nil>}
	I1025 18:39:15.867120   82181 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:39:15.990201   82181 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:39:15.990220   82181 ubuntu.go:71] root file system type: overlay
	I1025 18:39:15.990348   82181 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:39:15.990441   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:16.043773   82181 main.go:141] libmachine: Using SSH client type: native
	I1025 18:39:16.044051   82181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59994 <nil> <nil>}
	I1025 18:39:16.044105   82181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:39:16.175425   82181 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:39:16.175537   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:16.227828   82181 main.go:141] libmachine: Using SSH client type: native
	I1025 18:39:16.228125   82181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 59994 <nil> <nil>}
	I1025 18:39:16.228139   82181 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:39:16.354799   82181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:39:16.354817   82181 machine.go:91] provisioned docker machine in 4.279709137s
	I1025 18:39:16.354823   82181 start.go:300] post-start starting for "old-k8s-version-479000" (driver="docker")
	I1025 18:39:16.354846   82181 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:39:16.354923   82181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:39:16.354982   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:16.406769   82181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59994 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:39:16.495290   82181 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:39:16.499694   82181 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:39:16.499719   82181 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:39:16.499727   82181 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:39:16.499734   82181 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:39:16.499745   82181 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:39:16.499854   82181 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:39:16.500018   82181 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:39:16.500201   82181 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:39:16.509553   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:39:16.532128   82181 start.go:303] post-start completed in 177.280942ms
	I1025 18:39:16.532203   82181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:39:16.532278   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:16.584632   82181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59994 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:39:16.671945   82181 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:39:16.677333   82181 fix.go:56] fixHost completed within 5.115939296s
	I1025 18:39:16.677357   82181 start.go:83] releasing machines lock for "old-k8s-version-479000", held for 5.115987336s
	I1025 18:39:16.677451   82181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-479000
	I1025 18:39:16.729851   82181 ssh_runner.go:195] Run: cat /version.json
	I1025 18:39:16.729869   82181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:39:16.729918   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:16.729940   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:16.787416   82181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59994 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:39:16.787433   82181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59994 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/old-k8s-version-479000/id_rsa Username:docker}
	I1025 18:39:16.979026   82181 ssh_runner.go:195] Run: systemctl --version
	I1025 18:39:16.984675   82181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 18:39:16.989959   82181 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 18:39:16.990011   82181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 18:39:16.999308   82181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 18:39:17.008981   82181 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1025 18:39:17.008995   82181 start.go:472] detecting cgroup driver to use...
	I1025 18:39:17.009012   82181 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:39:17.009122   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:39:17.025801   82181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1025 18:39:17.036647   82181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:39:17.047350   82181 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:39:17.047431   82181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:39:17.058209   82181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:39:17.068786   82181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:39:17.079294   82181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:39:17.089989   82181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:39:17.100044   82181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:39:17.110640   82181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:39:17.119825   82181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:39:17.128936   82181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:39:17.185021   82181 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:39:17.283421   82181 start.go:472] detecting cgroup driver to use...
	I1025 18:39:17.283444   82181 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:39:17.283519   82181 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:39:17.302974   82181 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:39:17.303043   82181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:39:17.316763   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:39:17.336668   82181 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:39:17.342063   82181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:39:17.366246   82181 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:39:17.386248   82181 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:39:17.483757   82181 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:39:17.577108   82181 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:39:17.577201   82181 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:39:17.596296   82181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:39:17.683559   82181 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:39:17.953709   82181 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:39:17.981273   82181 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
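The step above rewrites /etc/docker/daemon.json so Docker uses the cgroup driver detected on the host ("cgroupfs"), then restarts the daemon and re-reads its version. As an illustration only (the 130-byte file's actual contents are not shown in this log), here is a minimal Go sketch that writes such a file; the exec-opts and log-driver keys are assumptions, not values taken from this run:

package main

import (
	"encoding/json"
	"os"
)

// writeDaemonJSON writes a minimal Docker daemon.json that pins the cgroup
// driver. The specific keys below are illustrative assumptions.
func writeDaemonJSON(path, driver string) error {
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=" + driver},
		"log-driver": "json-file",
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// Written to /tmp here so the sketch can run without root.
	if err := writeDaemonJSON("/tmp/daemon.json", "cgroupfs"); err != nil {
		panic(err)
	}
}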
	I1025 18:39:18.030456   82181 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	I1025 18:39:18.030587   82181 cli_runner.go:164] Run: docker exec -t old-k8s-version-479000 dig +short host.docker.internal
	I1025 18:39:18.164011   82181 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:39:18.164102   82181 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:39:18.169608   82181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:39:18.182965   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:18.236035   82181 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:39:18.236121   82181 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:39:18.258054   82181 docker.go:693] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1025 18:39:18.258068   82181 docker.go:699] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1025 18:39:18.258120   82181 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:39:18.268133   82181 ssh_runner.go:195] Run: which lz4
	I1025 18:39:18.272928   82181 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 18:39:18.277426   82181 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 18:39:18.277449   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I1025 18:39:23.877737   82181 docker.go:657] Took 5.604681 seconds to copy over tarball
	I1025 18:39:23.877800   82181 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 18:39:25.916974   82181 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.039088524s)
	I1025 18:39:25.916993   82181 ssh_runner.go:146] rm: /preloaded.tar.lz4
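The copy-and-extract step above moves the preloaded image tarball onto the node and unpacks it with tar's lz4 decompressor before deleting it. A small Go sketch of the same two commands, written as a hypothetical helper rather than minikube's own ssh_runner code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball with tar's lz4
// decompressor and then deletes it, mirroring the commands in the log.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println("preload extraction failed:", err)
	}
}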
	I1025 18:39:25.968956   82181 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:39:25.979481   82181 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I1025 18:39:25.998563   82181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:39:26.071362   82181 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:39:26.623389   82181 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:39:26.646752   82181 docker.go:693] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1025 18:39:26.646773   82181 docker.go:699] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I1025 18:39:26.646787   82181 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 18:39:26.653795   82181 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1025 18:39:26.654338   82181 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:39:26.654380   82181 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:39:26.654388   82181 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:39:26.654399   82181 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:39:26.654551   82181 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1025 18:39:26.654551   82181 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:39:26.654642   82181 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1025 18:39:26.660296   82181 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:39:26.660392   82181 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1025 18:39:26.660467   82181 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1025 18:39:26.661428   82181 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1025 18:39:26.662092   82181 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:39:26.662177   82181 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:39:26.662423   82181 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:39:26.662462   82181 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:39:27.287746   82181 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:39:27.312480   82181 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1025 18:39:27.312532   82181 docker.go:318] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:39:27.312589   82181 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I1025 18:39:27.336543   82181 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1025 18:39:27.504042   82181 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1025 18:39:27.525941   82181 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1025 18:39:27.525971   82181 docker.go:318] Removing image: registry.k8s.io/pause:3.1
	I1025 18:39:27.526031   82181 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I1025 18:39:27.550094   82181 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1025 18:39:27.831679   82181 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1025 18:39:27.870870   82181 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1025 18:39:27.870916   82181 docker.go:318] Removing image: registry.k8s.io/coredns:1.6.2
	I1025 18:39:27.871025   82181 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I1025 18:39:27.924402   82181 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1025 18:39:28.229208   82181 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1025 18:39:28.252260   82181 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1025 18:39:28.252292   82181 docker.go:318] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1025 18:39:28.252350   82181 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I1025 18:39:28.276903   82181 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1025 18:39:28.554698   82181 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:39:28.578492   82181 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1025 18:39:28.578533   82181 docker.go:318] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:39:28.578593   82181 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1025 18:39:28.604270   82181 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1025 18:39:28.899343   82181 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:39:28.924116   82181 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1025 18:39:28.924144   82181 docker.go:318] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:39:28.924202   82181 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1025 18:39:28.949372   82181 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1025 18:39:29.576943   82181 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:39:29.599139   82181 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1025 18:39:29.599165   82181 docker.go:318] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:39:29.599224   82181 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1025 18:39:29.615000   82181 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:39:29.619417   82181 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1025 18:39:29.636855   82181 cache_images.go:92] LoadImages completed in 2.989963774s
	W1025 18:39:29.636916   82181 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
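Each "needs transfer" decision above compares the image ID reported by the container runtime with the ID expected for the cached image. A hedged Go sketch of that comparison, using the kube-proxy image name and hash from earlier in this log as example inputs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime is missing the image or holds it
// under a different ID than the one expected from the local cache.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the runtime at all
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return got != wantID
}

func main() {
	// Example values taken from the kube-proxy check earlier in this log.
	fmt.Println(needsTransfer(
		"registry.k8s.io/kube-proxy:v1.16.0",
		"c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384"))
}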
	I1025 18:39:29.637009   82181 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:39:29.692750   82181 cni.go:84] Creating CNI manager for ""
	I1025 18:39:29.692776   82181 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:39:29.692803   82181 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 18:39:29.692820   82181 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-479000 NodeName:old-k8s-version-479000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 18:39:29.692928   82181 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-479000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-479000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:39:29.692996   82181 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-479000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:39:29.693058   82181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1025 18:39:29.703147   82181 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:39:29.703227   82181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:39:29.712888   82181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1025 18:39:29.732144   82181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:39:29.751265   82181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I1025 18:39:29.769795   82181 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:39:29.775265   82181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
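The grep/echo/cp pipeline above ensures /etc/hosts carries exactly one control-plane.minikube.internal entry. A rough Go equivalent, assuming a plain hosts file and a tab-separated entry; this is a sketch, not the code minikube actually runs:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any existing line ending in "<tab>host" and appends a
// fresh "ip<tab>host" line, roughly what the shell pipeline above does.
func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// A scratch copy is used here; the real run edits /etc/hosts on the node.
	fmt.Println(setHostsEntry("/tmp/hosts", "192.168.67.2", "control-plane.minikube.internal"))
}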
	I1025 18:39:29.788562   82181 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000 for IP: 192.168.67.2
	I1025 18:39:29.788588   82181 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:39:29.788788   82181 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:39:29.788893   82181 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:39:29.789017   82181 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/client.key
	I1025 18:39:29.789130   82181 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.key.c7fa3a9e
	I1025 18:39:29.789222   82181 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.key
	I1025 18:39:29.789452   82181 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:39:29.789497   82181 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:39:29.789509   82181 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:39:29.789544   82181 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:39:29.789583   82181 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:39:29.789612   82181 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:39:29.789683   82181 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:39:29.790335   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 18:39:29.814476   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 18:39:29.839458   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:39:29.863028   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/old-k8s-version-479000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 18:39:29.888034   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:39:29.912022   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:39:29.940930   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:39:29.965715   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:39:29.990593   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:39:30.015342   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:39:30.041355   82181 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:39:30.066461   82181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:39:30.085130   82181 ssh_runner.go:195] Run: openssl version
	I1025 18:39:30.091689   82181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:39:30.102397   82181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:39:30.107103   82181 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:39:30.107155   82181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:39:30.114455   82181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:39:30.125161   82181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:39:30.137592   82181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:39:30.142544   82181 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:39:30.142594   82181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:39:30.150442   82181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:39:30.160989   82181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:39:30.172122   82181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:39:30.177631   82181 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:39:30.177681   82181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:39:30.186443   82181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:39:30.197474   82181 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:39:30.202240   82181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 18:39:30.209559   82181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 18:39:30.217231   82181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 18:39:30.224964   82181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 18:39:30.233766   82181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 18:39:30.245365   82181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
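The openssl checks above ask whether each certificate expires within the next 86400 seconds (24 hours). The same test can be expressed natively in Go; this is an illustrative sketch, reusing the apiserver certificate path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the Go equivalent of `openssl x509 -checkend`: it reports
// whether the certificate's NotAfter falls inside the next d of wall time.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}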
	I1025 18:39:30.257312   82181 kubeadm.go:404] StartCluster: {Name:old-k8s-version-479000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:39:30.257449   82181 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:39:30.283972   82181 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:39:30.295754   82181 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 18:39:30.295776   82181 kubeadm.go:636] restartCluster start
	I1025 18:39:30.295849   82181 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 18:39:30.306183   82181 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:30.306267   82181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-479000
	I1025 18:39:30.367544   82181 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-479000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:39:30.367741   82181 kubeconfig.go:146] "old-k8s-version-479000" context is missing from /Users/jenkins/minikube-integration/17488-64832/kubeconfig - will repair!
	I1025 18:39:30.368061   82181 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:39:30.369397   82181 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 18:39:30.379833   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:30.379940   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:30.391075   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:30.391086   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:30.391138   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:30.402107   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:30.904290   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:30.904438   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:30.917441   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:31.403564   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:31.403795   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:31.415841   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:31.903108   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:31.903219   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:31.916415   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:32.404315   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:32.404440   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:32.415998   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:32.903387   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:32.903559   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:32.916138   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:33.402659   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:33.402850   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:33.415626   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:33.902610   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:33.902831   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:33.916009   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:34.402951   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:34.403160   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:34.415905   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:34.902357   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:34.902490   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:34.913753   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:35.403068   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:35.403174   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:35.415990   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:35.902529   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:35.902664   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:35.915701   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:36.404396   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:36.404503   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:36.417367   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:36.902555   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:36.902698   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:36.915419   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:37.403598   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:37.403658   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:37.414760   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:37.903307   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:37.903448   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:37.915189   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:38.403283   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:38.403441   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:38.416294   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:38.902658   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:38.902790   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:38.915847   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:39.403105   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:39.403266   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:39.416325   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:39.902530   82181 api_server.go:166] Checking apiserver status ...
	I1025 18:39:39.902634   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:39:39.914191   82181 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:39:40.380821   82181 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
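The loop above polls roughly every 500ms for a kube-apiserver process and gives up when the surrounding context deadline expires, which is what produces the "context deadline exceeded" line. A compact Go sketch of that polling pattern (hypothetical helper, with a shorter timeout for illustration):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep about twice a second until a matching process
// exists or the context deadline is hit.
func waitForProcess(ctx context.Context, pattern string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil // found a matching process
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for %q: %w", pattern, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	// 10s here just to keep the sketch quick; the real wait runs much longer.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
}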
	I1025 18:39:40.380922   82181 kubeadm.go:1128] stopping kube-system containers ...
	I1025 18:39:40.381053   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:39:40.404801   82181 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 18:39:40.417657   82181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:39:40.427601   82181 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Oct 26 01:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Oct 26 01:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Oct 26 01:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Oct 26 01:35 /etc/kubernetes/scheduler.conf
	
	I1025 18:39:40.427662   82181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 18:39:40.437276   82181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 18:39:40.446690   82181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 18:39:40.456101   82181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 18:39:40.465678   82181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:39:40.475612   82181 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 18:39:40.475624   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:39:40.534595   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:39:41.308609   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:39:41.509894   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:39:41.582652   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
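The five commands above re-run the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A Go sketch that drives the same phase sequence and stops at the first failure; it assumes kubeadm is on PATH rather than under /var/lib/minikube/binaries:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase names mirror the commands shown in the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase", p[0], "failed:", err)
			return
		}
	}
}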
	I1025 18:39:41.646828   82181 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:39:41.646906   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:41.658180   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:42.169767   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:42.669701   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:43.169724   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:43.669307   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:44.169783   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:44.669866   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:45.169773   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:45.669814   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:46.169751   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:46.670367   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:47.170211   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:47.669366   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:48.169466   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:48.669567   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:49.169279   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:49.669540   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:50.169469   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:50.669648   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:51.169473   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:51.669594   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:52.169562   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:52.669388   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:53.169427   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:53.669471   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:54.169682   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:54.669626   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:55.169638   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:55.669591   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:56.169724   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:56.669579   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:57.169802   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:57.669763   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:58.169616   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:58.669533   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:59.169753   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:39:59.669755   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:00.169628   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:00.669688   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:01.169631   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:01.669738   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:02.169837   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:02.669758   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:03.169880   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:03.669880   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:04.169994   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:04.671748   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:05.170263   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:05.669963   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:06.170226   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:06.669899   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:07.170124   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:07.670033   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:08.170983   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:08.670623   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:09.171179   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:09.670444   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:10.170483   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:10.670585   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:11.171321   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:11.670513   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:12.170536   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:12.670539   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:13.170569   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:13.670637   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:14.170608   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:14.670644   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:15.170631   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:15.670660   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:16.170233   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:16.670683   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:17.170706   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:17.670739   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:18.170727   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:18.670744   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:19.170776   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:19.670775   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:20.170799   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:20.670891   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:21.170951   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:21.670837   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:22.170921   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:22.670857   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:23.170855   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:23.670902   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:24.170957   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:24.670908   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:25.170941   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:25.671567   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:26.170459   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:26.670579   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:27.170640   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:27.670882   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:28.170593   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:28.670606   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:29.170588   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:29.670694   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:30.170890   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:30.670752   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:31.170577   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:31.670865   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:32.171361   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:32.671706   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:33.171500   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:33.671212   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:34.171264   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:34.671235   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:35.171743   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:35.671452   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:36.171250   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:36.671268   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:37.171840   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:37.670864   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:38.171389   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:38.671508   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:39.171476   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:39.671932   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:40.171428   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:40.671418   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:41.171435   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:41.671506   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:40:41.700174   82181 logs.go:284] 0 containers: []
	W1025 18:40:41.700189   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:40:41.700287   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:40:41.728761   82181 logs.go:284] 0 containers: []
	W1025 18:40:41.728777   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:40:41.728849   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:40:41.755039   82181 logs.go:284] 0 containers: []
	W1025 18:40:41.755055   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:40:41.755140   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:40:41.786579   82181 logs.go:284] 0 containers: []
	W1025 18:40:41.786596   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:40:41.786679   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:40:41.814330   82181 logs.go:284] 0 containers: []
	W1025 18:40:41.814345   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:40:41.814419   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:40:41.836001   82181 logs.go:284] 0 containers: []
	W1025 18:40:41.836023   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:40:41.836109   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:40:41.861381   82181 logs.go:284] 0 containers: []
	W1025 18:40:41.861397   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:40:41.861483   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:40:41.886703   82181 logs.go:284] 0 containers: []
	W1025 18:40:41.886717   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:40:41.886731   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:40:41.886745   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:40:41.967057   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:40:41.967076   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:40:42.023351   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:40:42.023381   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:40:42.043718   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:40:42.043734   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:40:42.125520   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:40:42.125536   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:40:42.125545   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:40:44.644206   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:44.656532   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:40:44.675804   82181 logs.go:284] 0 containers: []
	W1025 18:40:44.675817   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:40:44.675892   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:40:44.696845   82181 logs.go:284] 0 containers: []
	W1025 18:40:44.696859   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:40:44.696936   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:40:44.716806   82181 logs.go:284] 0 containers: []
	W1025 18:40:44.716821   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:40:44.716887   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:40:44.736732   82181 logs.go:284] 0 containers: []
	W1025 18:40:44.736746   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:40:44.736820   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:40:44.757627   82181 logs.go:284] 0 containers: []
	W1025 18:40:44.757641   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:40:44.757707   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:40:44.777977   82181 logs.go:284] 0 containers: []
	W1025 18:40:44.777997   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:40:44.778073   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:40:44.799198   82181 logs.go:284] 0 containers: []
	W1025 18:40:44.799213   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:40:44.799283   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:40:44.819399   82181 logs.go:284] 0 containers: []
	W1025 18:40:44.819421   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:40:44.819428   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:40:44.819436   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:40:44.860120   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:40:44.860135   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:40:44.875370   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:40:44.875394   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:40:44.933008   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:40:44.933030   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:40:44.933039   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:40:44.949405   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:40:44.949424   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:40:47.504491   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:47.515990   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:40:47.536810   82181 logs.go:284] 0 containers: []
	W1025 18:40:47.536823   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:40:47.536885   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:40:47.557629   82181 logs.go:284] 0 containers: []
	W1025 18:40:47.557641   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:40:47.557702   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:40:47.578248   82181 logs.go:284] 0 containers: []
	W1025 18:40:47.578262   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:40:47.578327   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:40:47.599598   82181 logs.go:284] 0 containers: []
	W1025 18:40:47.599611   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:40:47.599678   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:40:47.619301   82181 logs.go:284] 0 containers: []
	W1025 18:40:47.619314   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:40:47.619383   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:40:47.639850   82181 logs.go:284] 0 containers: []
	W1025 18:40:47.639864   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:40:47.639930   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:40:47.660708   82181 logs.go:284] 0 containers: []
	W1025 18:40:47.660727   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:40:47.660802   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:40:47.681088   82181 logs.go:284] 0 containers: []
	W1025 18:40:47.681102   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:40:47.681109   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:40:47.681115   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:40:47.695547   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:40:47.695566   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:40:47.754990   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:40:47.755005   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:40:47.755013   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:40:47.771661   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:40:47.771677   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:40:47.826527   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:40:47.826542   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:40:50.369269   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:50.381009   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:40:50.401414   82181 logs.go:284] 0 containers: []
	W1025 18:40:50.401428   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:40:50.401502   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:40:50.423478   82181 logs.go:284] 0 containers: []
	W1025 18:40:50.423495   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:40:50.423582   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:40:50.444448   82181 logs.go:284] 0 containers: []
	W1025 18:40:50.444461   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:40:50.444537   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:40:50.472319   82181 logs.go:284] 0 containers: []
	W1025 18:40:50.472333   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:40:50.472417   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:40:50.493865   82181 logs.go:284] 0 containers: []
	W1025 18:40:50.493879   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:40:50.493952   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:40:50.514742   82181 logs.go:284] 0 containers: []
	W1025 18:40:50.514756   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:40:50.514813   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:40:50.535636   82181 logs.go:284] 0 containers: []
	W1025 18:40:50.535655   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:40:50.535744   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:40:50.556078   82181 logs.go:284] 0 containers: []
	W1025 18:40:50.556094   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:40:50.556103   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:40:50.556110   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:40:50.614571   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:40:50.614583   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:40:50.614590   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:40:50.630926   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:40:50.630944   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:40:50.687032   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:40:50.687046   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:40:50.728425   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:40:50.728445   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:40:53.245919   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:53.258677   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:40:53.278368   82181 logs.go:284] 0 containers: []
	W1025 18:40:53.278382   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:40:53.278449   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:40:53.299108   82181 logs.go:284] 0 containers: []
	W1025 18:40:53.299121   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:40:53.299190   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:40:53.318953   82181 logs.go:284] 0 containers: []
	W1025 18:40:53.318967   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:40:53.319034   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:40:53.339536   82181 logs.go:284] 0 containers: []
	W1025 18:40:53.339549   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:40:53.339631   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:40:53.360198   82181 logs.go:284] 0 containers: []
	W1025 18:40:53.360218   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:40:53.360316   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:40:53.380977   82181 logs.go:284] 0 containers: []
	W1025 18:40:53.380991   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:40:53.381065   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:40:53.402370   82181 logs.go:284] 0 containers: []
	W1025 18:40:53.402384   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:40:53.402452   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:40:53.424363   82181 logs.go:284] 0 containers: []
	W1025 18:40:53.424378   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:40:53.424384   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:40:53.424395   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:40:53.469194   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:40:53.469213   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:40:53.485681   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:40:53.485696   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:40:53.545032   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:40:53.545044   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:40:53.545056   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:40:53.566708   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:40:53.566724   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:40:56.126085   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:56.137717   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:40:56.158346   82181 logs.go:284] 0 containers: []
	W1025 18:40:56.158366   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:40:56.158433   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:40:56.180988   82181 logs.go:284] 0 containers: []
	W1025 18:40:56.181004   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:40:56.181096   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:40:56.201451   82181 logs.go:284] 0 containers: []
	W1025 18:40:56.201469   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:40:56.201538   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:40:56.223940   82181 logs.go:284] 0 containers: []
	W1025 18:40:56.223954   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:40:56.224023   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:40:56.248126   82181 logs.go:284] 0 containers: []
	W1025 18:40:56.248142   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:40:56.248238   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:40:56.271041   82181 logs.go:284] 0 containers: []
	W1025 18:40:56.271065   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:40:56.271137   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:40:56.295221   82181 logs.go:284] 0 containers: []
	W1025 18:40:56.295241   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:40:56.295316   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:40:56.317948   82181 logs.go:284] 0 containers: []
	W1025 18:40:56.317962   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:40:56.317969   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:40:56.317976   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:40:56.359877   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:40:56.359894   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:40:56.374776   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:40:56.374791   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:40:56.444860   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:40:56.444875   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:40:56.444882   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:40:56.475091   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:40:56.475108   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:40:59.040215   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:40:59.062671   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:40:59.090992   82181 logs.go:284] 0 containers: []
	W1025 18:40:59.091010   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:40:59.091090   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:40:59.117553   82181 logs.go:284] 0 containers: []
	W1025 18:40:59.117572   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:40:59.117648   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:40:59.150216   82181 logs.go:284] 0 containers: []
	W1025 18:40:59.150238   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:40:59.150339   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:40:59.187471   82181 logs.go:284] 0 containers: []
	W1025 18:40:59.187505   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:40:59.187596   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:40:59.210450   82181 logs.go:284] 0 containers: []
	W1025 18:40:59.210466   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:40:59.210538   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:40:59.231500   82181 logs.go:284] 0 containers: []
	W1025 18:40:59.231519   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:40:59.231598   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:40:59.260493   82181 logs.go:284] 0 containers: []
	W1025 18:40:59.260516   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:40:59.260642   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:40:59.294186   82181 logs.go:284] 0 containers: []
	W1025 18:40:59.294200   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:40:59.294207   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:40:59.294217   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:40:59.336976   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:40:59.337006   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:40:59.357652   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:40:59.357670   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:40:59.420794   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:40:59.420814   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:40:59.420821   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:40:59.450243   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:40:59.450278   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:02.013522   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:02.029283   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:02.070315   82181 logs.go:284] 0 containers: []
	W1025 18:41:02.070359   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:02.070457   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:02.103500   82181 logs.go:284] 0 containers: []
	W1025 18:41:02.103528   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:02.103599   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:02.129750   82181 logs.go:284] 0 containers: []
	W1025 18:41:02.129768   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:02.129879   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:02.156037   82181 logs.go:284] 0 containers: []
	W1025 18:41:02.156061   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:02.156129   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:02.181661   82181 logs.go:284] 0 containers: []
	W1025 18:41:02.181684   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:02.181774   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:02.209959   82181 logs.go:284] 0 containers: []
	W1025 18:41:02.209973   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:02.210033   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:02.238066   82181 logs.go:284] 0 containers: []
	W1025 18:41:02.238087   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:02.238206   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:02.263121   82181 logs.go:284] 0 containers: []
	W1025 18:41:02.263139   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:02.263150   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:02.263212   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:02.330263   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:02.330285   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:02.376293   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:02.376308   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:02.394101   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:02.394121   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:02.465454   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:02.465469   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:02.465485   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:04.984644   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:04.997392   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:05.021277   82181 logs.go:284] 0 containers: []
	W1025 18:41:05.021292   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:05.021374   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:05.046468   82181 logs.go:284] 0 containers: []
	W1025 18:41:05.046484   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:05.046572   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:05.069447   82181 logs.go:284] 0 containers: []
	W1025 18:41:05.069462   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:05.069542   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:05.095115   82181 logs.go:284] 0 containers: []
	W1025 18:41:05.095128   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:05.095218   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:05.120161   82181 logs.go:284] 0 containers: []
	W1025 18:41:05.120176   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:05.120250   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:05.143678   82181 logs.go:284] 0 containers: []
	W1025 18:41:05.143694   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:05.143781   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:05.167284   82181 logs.go:284] 0 containers: []
	W1025 18:41:05.167301   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:05.167373   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:05.191892   82181 logs.go:284] 0 containers: []
	W1025 18:41:05.191920   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:05.191930   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:05.191940   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:05.237068   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:05.237100   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:05.253346   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:05.253363   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:05.318765   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:05.318777   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:05.318787   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:05.337193   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:05.337210   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:07.896786   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:07.914764   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:07.943979   82181 logs.go:284] 0 containers: []
	W1025 18:41:07.944002   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:07.944103   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:07.977514   82181 logs.go:284] 0 containers: []
	W1025 18:41:07.977529   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:07.977606   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:08.000846   82181 logs.go:284] 0 containers: []
	W1025 18:41:08.000862   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:08.000940   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:08.023414   82181 logs.go:284] 0 containers: []
	W1025 18:41:08.023428   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:08.023496   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:08.049838   82181 logs.go:284] 0 containers: []
	W1025 18:41:08.049857   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:08.049940   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:08.078097   82181 logs.go:284] 0 containers: []
	W1025 18:41:08.078116   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:08.078191   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:08.108236   82181 logs.go:284] 0 containers: []
	W1025 18:41:08.108273   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:08.108376   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:08.135136   82181 logs.go:284] 0 containers: []
	W1025 18:41:08.135150   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:08.135157   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:08.135165   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:08.155839   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:08.155860   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:08.234428   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:08.234449   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:08.234456   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:08.259212   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:08.259234   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:08.337472   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:08.337488   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:10.888835   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:10.900569   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:10.922280   82181 logs.go:284] 0 containers: []
	W1025 18:41:10.922296   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:10.922369   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:10.943499   82181 logs.go:284] 0 containers: []
	W1025 18:41:10.943513   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:10.943590   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:10.965147   82181 logs.go:284] 0 containers: []
	W1025 18:41:10.965164   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:10.965233   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:10.985197   82181 logs.go:284] 0 containers: []
	W1025 18:41:10.985211   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:10.985290   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:11.008378   82181 logs.go:284] 0 containers: []
	W1025 18:41:11.008392   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:11.008461   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:11.030717   82181 logs.go:284] 0 containers: []
	W1025 18:41:11.030730   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:11.030823   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:11.053123   82181 logs.go:284] 0 containers: []
	W1025 18:41:11.053138   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:11.053208   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:11.076739   82181 logs.go:284] 0 containers: []
	W1025 18:41:11.076757   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:11.076767   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:11.076777   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:11.123677   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:11.123696   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:11.140624   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:11.140645   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:11.205005   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:11.205017   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:11.205024   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:11.222851   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:11.222866   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:13.786364   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:13.798442   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:13.821446   82181 logs.go:284] 0 containers: []
	W1025 18:41:13.821460   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:13.821536   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:13.842966   82181 logs.go:284] 0 containers: []
	W1025 18:41:13.842979   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:13.843044   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:13.864330   82181 logs.go:284] 0 containers: []
	W1025 18:41:13.864343   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:13.864415   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:13.886955   82181 logs.go:284] 0 containers: []
	W1025 18:41:13.886975   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:13.887076   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:13.910743   82181 logs.go:284] 0 containers: []
	W1025 18:41:13.910757   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:13.910836   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:13.933356   82181 logs.go:284] 0 containers: []
	W1025 18:41:13.933369   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:13.933435   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:13.957487   82181 logs.go:284] 0 containers: []
	W1025 18:41:13.957502   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:13.957584   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:13.980600   82181 logs.go:284] 0 containers: []
	W1025 18:41:13.980614   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:13.980621   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:13.980627   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:14.029162   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:14.029187   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:14.046912   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:14.046928   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:14.116428   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:14.116445   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:14.116454   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:14.135386   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:14.135407   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:16.698931   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:16.711297   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:16.734423   82181 logs.go:284] 0 containers: []
	W1025 18:41:16.734445   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:16.734536   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:16.760832   82181 logs.go:284] 0 containers: []
	W1025 18:41:16.760845   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:16.760929   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:16.785354   82181 logs.go:284] 0 containers: []
	W1025 18:41:16.785371   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:16.785475   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:16.814266   82181 logs.go:284] 0 containers: []
	W1025 18:41:16.814281   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:16.814350   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:16.835778   82181 logs.go:284] 0 containers: []
	W1025 18:41:16.835791   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:16.835861   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:16.860248   82181 logs.go:284] 0 containers: []
	W1025 18:41:16.860274   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:16.860376   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:16.885716   82181 logs.go:284] 0 containers: []
	W1025 18:41:16.885735   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:16.885805   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:16.908088   82181 logs.go:284] 0 containers: []
	W1025 18:41:16.908102   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:16.908109   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:16.908117   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:16.922250   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:16.922264   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:16.992990   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:16.993015   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:16.993028   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:17.010084   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:17.010101   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:17.072023   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:17.072040   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:19.626049   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:19.637304   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:19.663308   82181 logs.go:284] 0 containers: []
	W1025 18:41:19.663342   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:19.663514   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:19.691052   82181 logs.go:284] 0 containers: []
	W1025 18:41:19.691066   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:19.691135   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:19.712441   82181 logs.go:284] 0 containers: []
	W1025 18:41:19.712455   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:19.712563   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:19.733974   82181 logs.go:284] 0 containers: []
	W1025 18:41:19.733988   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:19.734056   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:19.777965   82181 logs.go:284] 0 containers: []
	W1025 18:41:19.777987   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:19.778097   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:19.799770   82181 logs.go:284] 0 containers: []
	W1025 18:41:19.799789   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:19.799866   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:19.819809   82181 logs.go:284] 0 containers: []
	W1025 18:41:19.819823   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:19.819893   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:19.839279   82181 logs.go:284] 0 containers: []
	W1025 18:41:19.839293   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:19.839301   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:19.839308   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:19.911179   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:19.911195   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:19.911201   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:19.927968   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:19.927987   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:19.983829   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:19.983843   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:20.025285   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:20.025302   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:22.542017   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:22.564166   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:22.598145   82181 logs.go:284] 0 containers: []
	W1025 18:41:22.598160   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:22.598225   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:22.618085   82181 logs.go:284] 0 containers: []
	W1025 18:41:22.618098   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:22.618163   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:22.638589   82181 logs.go:284] 0 containers: []
	W1025 18:41:22.638602   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:22.638676   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:22.663328   82181 logs.go:284] 0 containers: []
	W1025 18:41:22.663346   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:22.663486   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:22.691671   82181 logs.go:284] 0 containers: []
	W1025 18:41:22.691686   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:22.691753   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:22.712238   82181 logs.go:284] 0 containers: []
	W1025 18:41:22.712252   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:22.712326   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:22.732575   82181 logs.go:284] 0 containers: []
	W1025 18:41:22.732589   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:22.732684   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:22.757447   82181 logs.go:284] 0 containers: []
	W1025 18:41:22.757463   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:22.757472   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:22.757481   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:22.806321   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:22.806342   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:22.822260   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:22.822276   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:22.891735   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:22.891754   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:22.891765   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:22.914663   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:22.914682   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:25.488970   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:25.502017   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:25.521587   82181 logs.go:284] 0 containers: []
	W1025 18:41:25.521610   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:25.521683   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:25.542743   82181 logs.go:284] 0 containers: []
	W1025 18:41:25.542758   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:25.542826   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:25.562803   82181 logs.go:284] 0 containers: []
	W1025 18:41:25.562817   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:25.562896   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:25.582740   82181 logs.go:284] 0 containers: []
	W1025 18:41:25.582754   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:25.582829   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:25.604493   82181 logs.go:284] 0 containers: []
	W1025 18:41:25.604505   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:25.604577   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:25.625061   82181 logs.go:284] 0 containers: []
	W1025 18:41:25.625077   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:25.625179   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:25.647640   82181 logs.go:284] 0 containers: []
	W1025 18:41:25.647653   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:25.647728   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:25.671106   82181 logs.go:284] 0 containers: []
	W1025 18:41:25.671123   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:25.671134   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:25.671147   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:25.689385   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:25.689401   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:25.742758   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:25.742772   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:25.781725   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:25.798898   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:25.813485   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:25.813500   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:25.869240   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:28.370914   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:28.383859   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:28.404737   82181 logs.go:284] 0 containers: []
	W1025 18:41:28.404756   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:28.404853   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:28.425630   82181 logs.go:284] 0 containers: []
	W1025 18:41:28.425642   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:28.425715   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:28.445347   82181 logs.go:284] 0 containers: []
	W1025 18:41:28.445361   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:28.445434   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:28.465746   82181 logs.go:284] 0 containers: []
	W1025 18:41:28.465760   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:28.465846   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:28.487345   82181 logs.go:284] 0 containers: []
	W1025 18:41:28.487364   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:28.487431   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:28.507573   82181 logs.go:284] 0 containers: []
	W1025 18:41:28.507587   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:28.507650   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:28.528992   82181 logs.go:284] 0 containers: []
	W1025 18:41:28.529006   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:28.529074   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:28.549300   82181 logs.go:284] 0 containers: []
	W1025 18:41:28.549314   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:28.549321   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:28.549327   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:28.586301   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:28.586316   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:28.601185   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:28.601200   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:28.662133   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:28.662156   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:28.662163   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:28.681675   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:28.681695   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:31.237216   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:31.248763   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:31.269043   82181 logs.go:284] 0 containers: []
	W1025 18:41:31.269060   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:31.269132   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:31.289696   82181 logs.go:284] 0 containers: []
	W1025 18:41:31.289710   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:31.289780   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:31.309234   82181 logs.go:284] 0 containers: []
	W1025 18:41:31.309248   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:31.309316   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:31.329226   82181 logs.go:284] 0 containers: []
	W1025 18:41:31.329240   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:31.329306   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:31.349941   82181 logs.go:284] 0 containers: []
	W1025 18:41:31.349955   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:31.350047   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:31.369348   82181 logs.go:284] 0 containers: []
	W1025 18:41:31.369362   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:31.369432   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:31.390490   82181 logs.go:284] 0 containers: []
	W1025 18:41:31.390504   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:31.390585   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:31.410139   82181 logs.go:284] 0 containers: []
	W1025 18:41:31.410153   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:31.410160   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:31.410167   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:31.448064   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:31.448081   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:31.463082   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:31.463112   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:31.521297   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:31.521312   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:31.521342   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:31.537768   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:31.537783   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:34.093518   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:34.106348   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:34.126354   82181 logs.go:284] 0 containers: []
	W1025 18:41:34.126368   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:34.126446   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:34.148192   82181 logs.go:284] 0 containers: []
	W1025 18:41:34.148215   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:34.148300   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:34.172295   82181 logs.go:284] 0 containers: []
	W1025 18:41:34.172311   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:34.172436   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:34.198101   82181 logs.go:284] 0 containers: []
	W1025 18:41:34.198117   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:34.198207   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:34.221233   82181 logs.go:284] 0 containers: []
	W1025 18:41:34.221251   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:34.221340   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:34.242438   82181 logs.go:284] 0 containers: []
	W1025 18:41:34.242451   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:34.242519   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:34.280198   82181 logs.go:284] 0 containers: []
	W1025 18:41:34.280214   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:34.280307   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:34.299926   82181 logs.go:284] 0 containers: []
	W1025 18:41:34.299940   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:34.299947   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:34.299954   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:34.342169   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:34.342189   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:34.359414   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:34.359439   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:34.425985   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:34.426019   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:34.426033   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:34.445889   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:34.445903   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
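Each polling round above ends with the same log-gathering pass. Grouped together as a sketch (the commands are exactly the ones shown in the Run: lines, assuming shell access to the node), the pass is:

	# Sketch: the per-round log collection seen above (kubelet journal, dmesg,
	# describe nodes, Docker/cri-docker journal, container status).
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u docker -u cri-docker -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a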
	I1025 18:41:37.011628   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:37.024386   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:37.044760   82181 logs.go:284] 0 containers: []
	W1025 18:41:37.044781   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:37.044867   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:37.073578   82181 logs.go:284] 0 containers: []
	W1025 18:41:37.073592   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:37.073682   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:37.105719   82181 logs.go:284] 0 containers: []
	W1025 18:41:37.105734   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:37.105839   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:37.128559   82181 logs.go:284] 0 containers: []
	W1025 18:41:37.128571   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:37.128642   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:37.148664   82181 logs.go:284] 0 containers: []
	W1025 18:41:37.148679   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:37.148748   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:37.177057   82181 logs.go:284] 0 containers: []
	W1025 18:41:37.177125   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:37.177244   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:37.202022   82181 logs.go:284] 0 containers: []
	W1025 18:41:37.202035   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:37.202106   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:37.223206   82181 logs.go:284] 0 containers: []
	W1025 18:41:37.223219   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:37.223226   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:37.223237   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:37.264051   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:37.264073   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:37.282991   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:37.283023   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:37.348964   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:37.348980   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:37.348991   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:37.370568   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:37.370584   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:39.941050   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:39.951948   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:39.971632   82181 logs.go:284] 0 containers: []
	W1025 18:41:39.971650   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:39.971727   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:39.993700   82181 logs.go:284] 0 containers: []
	W1025 18:41:39.993714   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:39.993781   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:40.013780   82181 logs.go:284] 0 containers: []
	W1025 18:41:40.013794   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:40.013865   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:40.033896   82181 logs.go:284] 0 containers: []
	W1025 18:41:40.033910   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:40.033974   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:40.053660   82181 logs.go:284] 0 containers: []
	W1025 18:41:40.053674   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:40.053740   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:40.074304   82181 logs.go:284] 0 containers: []
	W1025 18:41:40.074325   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:40.074394   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:40.094895   82181 logs.go:284] 0 containers: []
	W1025 18:41:40.094908   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:40.094974   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:40.114512   82181 logs.go:284] 0 containers: []
	W1025 18:41:40.114525   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:40.114532   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:40.114539   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:40.129322   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:40.129339   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:40.190867   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:40.190880   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:40.190894   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:40.207447   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:40.207461   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:40.262181   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:40.262195   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:42.808246   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:42.819426   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:42.838908   82181 logs.go:284] 0 containers: []
	W1025 18:41:42.838923   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:42.838993   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:42.858335   82181 logs.go:284] 0 containers: []
	W1025 18:41:42.858348   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:42.858414   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:42.879556   82181 logs.go:284] 0 containers: []
	W1025 18:41:42.879569   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:42.879637   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:42.899372   82181 logs.go:284] 0 containers: []
	W1025 18:41:42.899384   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:42.899450   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:42.919575   82181 logs.go:284] 0 containers: []
	W1025 18:41:42.919589   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:42.919654   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:42.940706   82181 logs.go:284] 0 containers: []
	W1025 18:41:42.940724   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:42.940792   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:42.961239   82181 logs.go:284] 0 containers: []
	W1025 18:41:42.961255   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:42.961333   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:42.984914   82181 logs.go:284] 0 containers: []
	W1025 18:41:42.984928   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:42.984935   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:42.984945   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:43.023648   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:43.023662   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:43.037568   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:43.037582   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:43.095665   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:43.095684   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:43.095698   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:43.112125   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:43.112139   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:45.666938   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:45.678718   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:45.700172   82181 logs.go:284] 0 containers: []
	W1025 18:41:45.700187   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:45.700262   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:45.720327   82181 logs.go:284] 0 containers: []
	W1025 18:41:45.720341   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:45.720412   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:45.740873   82181 logs.go:284] 0 containers: []
	W1025 18:41:45.740888   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:45.740961   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:45.782114   82181 logs.go:284] 0 containers: []
	W1025 18:41:45.801897   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:45.801986   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:45.823573   82181 logs.go:284] 0 containers: []
	W1025 18:41:45.823589   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:45.823658   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:45.843459   82181 logs.go:284] 0 containers: []
	W1025 18:41:45.843472   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:45.843541   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:45.868768   82181 logs.go:284] 0 containers: []
	W1025 18:41:45.868785   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:45.868876   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:45.890789   82181 logs.go:284] 0 containers: []
	W1025 18:41:45.890803   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:45.890811   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:45.890818   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:45.929018   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:45.929033   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:45.943408   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:45.943423   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:46.001997   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:46.002011   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:46.002017   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:46.018947   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:46.018963   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:48.575514   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:48.588361   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:48.608234   82181 logs.go:284] 0 containers: []
	W1025 18:41:48.608247   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:48.608326   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:48.628210   82181 logs.go:284] 0 containers: []
	W1025 18:41:48.628224   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:48.628288   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:48.649245   82181 logs.go:284] 0 containers: []
	W1025 18:41:48.649258   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:48.649336   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:48.670072   82181 logs.go:284] 0 containers: []
	W1025 18:41:48.670087   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:48.670150   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:48.690488   82181 logs.go:284] 0 containers: []
	W1025 18:41:48.690503   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:48.690580   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:48.713299   82181 logs.go:284] 0 containers: []
	W1025 18:41:48.713314   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:48.713406   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:48.734034   82181 logs.go:284] 0 containers: []
	W1025 18:41:48.734051   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:48.734128   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:48.755643   82181 logs.go:284] 0 containers: []
	W1025 18:41:48.755657   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:48.755664   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:48.755670   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:48.788262   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:48.788277   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:48.842149   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:48.842164   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:48.882379   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:48.882396   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:48.897069   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:48.897088   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:48.954431   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
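Every round fails the same way: no kube-apiserver process or container is found, and `kubectl describe nodes` cannot reach localhost:8443. A minimal way to reproduce the readiness checks this loop keeps retrying (inside the node; the final curl probe is an added illustration under that assumption, not something this log runs):

	# Sketch: the apiserver readiness probes retried above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'                       # process check (finds nothing here)
	docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'   # container check (empty here)
	sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig                        # fails: connection to localhost:8443 refused
	# Added illustration (assumption): probe the apiserver secure port directly.
	curl -k https://localhost:8443/healthz || echo "apiserver not listening"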
	I1025 18:41:51.455002   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:51.474819   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:51.496295   82181 logs.go:284] 0 containers: []
	W1025 18:41:51.496309   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:51.496386   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:51.516771   82181 logs.go:284] 0 containers: []
	W1025 18:41:51.516786   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:51.516854   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:51.537626   82181 logs.go:284] 0 containers: []
	W1025 18:41:51.537643   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:51.537721   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:51.561636   82181 logs.go:284] 0 containers: []
	W1025 18:41:51.561658   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:51.561783   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:51.591997   82181 logs.go:284] 0 containers: []
	W1025 18:41:51.592011   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:51.592072   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:51.612346   82181 logs.go:284] 0 containers: []
	W1025 18:41:51.612364   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:51.612434   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:51.633786   82181 logs.go:284] 0 containers: []
	W1025 18:41:51.633801   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:51.633863   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:51.655624   82181 logs.go:284] 0 containers: []
	W1025 18:41:51.655649   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:51.655662   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:51.655673   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:51.740230   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:51.740254   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:51.804200   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:51.804217   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:51.818589   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:51.818607   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:51.875539   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:51.875551   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:51.875565   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:54.393034   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:54.405748   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:54.425337   82181 logs.go:284] 0 containers: []
	W1025 18:41:54.425349   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:54.425414   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:54.445274   82181 logs.go:284] 0 containers: []
	W1025 18:41:54.445287   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:54.445354   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:54.466464   82181 logs.go:284] 0 containers: []
	W1025 18:41:54.466477   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:54.466545   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:54.486534   82181 logs.go:284] 0 containers: []
	W1025 18:41:54.486548   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:54.486618   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:54.507425   82181 logs.go:284] 0 containers: []
	W1025 18:41:54.507438   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:54.507507   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:54.527731   82181 logs.go:284] 0 containers: []
	W1025 18:41:54.527745   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:54.527811   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:54.547253   82181 logs.go:284] 0 containers: []
	W1025 18:41:54.547267   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:54.547339   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:54.567530   82181 logs.go:284] 0 containers: []
	W1025 18:41:54.567544   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:54.567551   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:54.567558   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:54.581891   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:54.581905   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:54.641436   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:54.641449   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:54.641465   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:54.658284   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:54.658300   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:41:54.716879   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:54.716903   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:57.258234   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:41:57.271299   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:41:57.290731   82181 logs.go:284] 0 containers: []
	W1025 18:41:57.290744   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:41:57.290812   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:41:57.310850   82181 logs.go:284] 0 containers: []
	W1025 18:41:57.310864   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:41:57.310937   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:41:57.331363   82181 logs.go:284] 0 containers: []
	W1025 18:41:57.331377   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:41:57.331445   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:41:57.352461   82181 logs.go:284] 0 containers: []
	W1025 18:41:57.352475   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:41:57.352540   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:41:57.373114   82181 logs.go:284] 0 containers: []
	W1025 18:41:57.373137   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:41:57.373211   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:41:57.394541   82181 logs.go:284] 0 containers: []
	W1025 18:41:57.394555   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:41:57.394625   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:41:57.414784   82181 logs.go:284] 0 containers: []
	W1025 18:41:57.414798   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:41:57.414870   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:41:57.436384   82181 logs.go:284] 0 containers: []
	W1025 18:41:57.436396   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:41:57.436402   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:41:57.436410   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:41:57.474011   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:41:57.474024   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:41:57.488625   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:41:57.488639   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:41:57.547676   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:41:57.547689   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:41:57.547696   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:41:57.564029   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:41:57.564044   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:00.120068   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:00.133136   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:00.153004   82181 logs.go:284] 0 containers: []
	W1025 18:42:00.153018   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:00.153088   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:00.174832   82181 logs.go:284] 0 containers: []
	W1025 18:42:00.174851   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:00.174914   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:00.194431   82181 logs.go:284] 0 containers: []
	W1025 18:42:00.194445   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:00.194517   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:00.214632   82181 logs.go:284] 0 containers: []
	W1025 18:42:00.214646   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:00.214715   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:00.233990   82181 logs.go:284] 0 containers: []
	W1025 18:42:00.234004   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:00.234080   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:00.253992   82181 logs.go:284] 0 containers: []
	W1025 18:42:00.254005   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:00.254070   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:00.274091   82181 logs.go:284] 0 containers: []
	W1025 18:42:00.274108   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:00.274178   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:00.294663   82181 logs.go:284] 0 containers: []
	W1025 18:42:00.294676   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:00.294683   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:00.294693   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:00.336508   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:00.336523   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:00.351076   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:00.351090   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:00.408365   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:00.408379   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:00.408386   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:00.424770   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:00.424785   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:02.981674   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:02.993230   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:03.013247   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.013259   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:03.013322   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:03.033221   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.033234   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:03.033291   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:03.055295   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.055310   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:03.055376   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:03.078169   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.078182   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:03.078271   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:03.099922   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.099935   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:03.100006   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:03.123324   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.123338   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:03.123396   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:03.145660   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.145671   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:03.145736   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:03.167377   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.167390   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:03.167398   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:03.167409   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:03.207852   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:03.207873   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:03.223354   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:03.223369   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:03.289769   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:03.289782   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:03.289790   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:03.306732   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:03.306755   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:05.870602   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:05.883759   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:05.903227   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.903243   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:05.903317   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:05.923740   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.923753   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:05.923829   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:05.945134   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.945150   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:05.945223   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:05.965941   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.965954   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:05.966027   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:05.993716   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.993729   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:05.993799   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:06.014287   82181 logs.go:284] 0 containers: []
	W1025 18:42:06.014339   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:06.014459   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:06.035988   82181 logs.go:284] 0 containers: []
	W1025 18:42:06.036002   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:06.036069   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:06.056905   82181 logs.go:284] 0 containers: []
	W1025 18:42:06.056919   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:06.056926   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:06.056942   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:06.094581   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:06.094594   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:06.109207   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:06.109220   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:06.166955   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:06.166967   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:06.166974   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:06.183129   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:06.183144   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:08.738136   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:08.750258   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:08.772037   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.772050   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:08.772116   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:08.793588   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.793602   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:08.793685   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:08.817186   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.817200   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:08.817262   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:08.839823   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.839836   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:08.839899   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:08.862903   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.862919   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:08.862987   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:08.887382   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.887399   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:08.887480   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:08.912424   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.912444   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:08.912545   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:08.940104   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.940125   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:08.940136   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:08.940147   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:09.018119   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:09.018136   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:09.069630   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:09.069643   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:09.085265   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:09.085282   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:09.154094   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:09.154109   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:09.154116   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:11.676485   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:11.689450   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:11.709336   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.709349   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:11.709419   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:11.728975   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.728987   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:11.729055   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:11.749619   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.749631   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:11.749700   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:11.770661   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.770675   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:11.770742   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:11.791986   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.792000   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:11.792068   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:11.812462   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.812474   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:11.812540   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:11.832352   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.832365   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:11.832431   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:11.853457   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.853470   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:11.853477   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:11.853484   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:11.913491   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:11.913508   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:11.913515   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:11.931802   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:11.931817   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:12.001077   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:12.001094   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:12.043548   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:12.043566   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:14.559675   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:14.573031   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:14.592894   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.592908   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:14.592985   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:14.613668   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.613680   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:14.613744   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:14.634370   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.634382   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:14.634449   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:14.654123   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.654137   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:14.654212   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:14.674400   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.674413   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:14.674488   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:14.694240   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.694254   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:14.694318   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:14.714702   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.714715   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:14.714788   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:14.735889   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.735902   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:14.735910   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:14.735917   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:14.793730   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:14.793742   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:14.793749   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:14.809751   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:14.809765   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:14.862782   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:14.862797   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:14.899756   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:14.899770   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:17.415881   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:17.428615   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:17.448376   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.448389   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:17.448453   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:17.469473   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.469486   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:17.469548   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:17.490090   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.490109   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:17.490188   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:17.510413   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.510425   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:17.510493   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:17.530323   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.530335   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:17.530400   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:17.549925   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.549938   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:17.550006   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:17.569538   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.569559   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:17.569624   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:17.590102   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.590117   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:17.590128   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:17.590139   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:17.629668   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:17.629681   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:17.644389   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:17.644403   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:17.701382   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:17.701401   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:17.701409   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:17.717911   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:17.717926   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:20.273790   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:20.287156   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:20.306764   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.306777   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:20.306846   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:20.327638   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.327653   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:20.327722   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:20.347636   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.347650   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:20.347715   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:20.368304   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.368315   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:20.368373   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:20.388626   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.388638   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:20.388715   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:20.409999   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.410012   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:20.410086   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:20.429818   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.429830   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:20.429910   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:20.450985   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.450997   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:20.451003   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:20.451010   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:20.467569   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:20.467587   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:20.521550   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:20.521565   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:20.561885   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:20.561908   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:20.577051   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:20.577068   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:20.638042   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:23.138443   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:23.155511   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:23.176069   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.176083   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:23.176162   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:23.197051   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.197064   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:23.197134   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:23.221643   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.221657   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:23.221724   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:23.251876   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.251894   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:23.251993   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:23.289923   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.289936   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:23.290002   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:23.309441   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.309454   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:23.309522   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:23.336977   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.337029   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:23.337176   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:23.360785   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.360799   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:23.360806   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:23.360812   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:23.408253   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:23.408269   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:23.433543   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:23.433568   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:23.494375   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:23.494393   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:23.494401   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:23.512512   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:23.512535   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:26.079147   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:26.092048   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:26.111323   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.111350   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:26.111453   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:26.136937   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.136951   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:26.137019   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:26.158149   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.158163   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:26.158238   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:26.179188   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.179203   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:26.179269   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:26.202527   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.202543   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:26.202613   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:26.224821   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.224836   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:26.224909   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:26.245135   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.245148   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:26.245217   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:26.284062   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.284075   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:26.284089   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:26.284096   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:26.300674   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:26.300689   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:26.353791   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:26.353806   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:26.393216   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:26.393232   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:26.407929   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:26.407944   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:26.466691   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:28.966934   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:28.978454   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:28.999866   82181 logs.go:284] 0 containers: []
	W1025 18:42:28.999879   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:28.999947   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:29.020989   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.021003   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:29.021076   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:29.043200   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.043214   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:29.043280   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:29.065502   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.065515   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:29.065617   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:29.085973   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.085986   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:29.086051   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:29.106325   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.106338   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:29.106400   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:29.127027   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.127040   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:29.127106   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:29.147670   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.147684   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:29.147692   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:29.147699   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:29.192133   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:29.192153   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:29.209030   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:29.209045   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:29.285627   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:29.285642   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:29.285649   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:29.302481   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:29.302496   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:31.860934   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:31.873726   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:31.893030   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.893044   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:31.893110   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:31.914194   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.914206   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:31.914282   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:31.936038   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.936051   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:31.936118   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:31.956278   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.956292   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:31.956361   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:31.977253   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.977268   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:31.977341   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:31.997016   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.997029   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:31.997100   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:32.018599   82181 logs.go:284] 0 containers: []
	W1025 18:42:32.018613   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:32.018712   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:32.038346   82181 logs.go:284] 0 containers: []
	W1025 18:42:32.038358   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:32.038364   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:32.038370   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:32.079868   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:32.079885   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:32.094721   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:32.094740   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:32.152312   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:32.152324   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:32.152331   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:32.170501   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:32.170515   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:34.730788   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:34.743703   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:34.763876   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.763891   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:34.763961   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:34.784936   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.784949   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:34.785015   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:34.804472   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.804492   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:34.804559   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:34.823884   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.823896   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:34.823961   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:34.844084   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.844097   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:34.844163   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:34.866866   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.866880   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:34.866948   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:34.887159   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.887178   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:34.887247   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:34.906508   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.906523   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:34.906532   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:34.906539   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:34.944889   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:34.944903   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:34.959750   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:34.959778   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:35.016706   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:35.016734   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:35.016746   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:35.033235   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:35.033250   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:37.588779   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:37.602341   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:37.621945   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.621962   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:37.622030   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:37.641340   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.641354   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:37.641425   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:37.662695   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.662709   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:37.662774   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:37.682493   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.682507   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:37.682576   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:37.702946   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.702960   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:37.703030   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:37.724197   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.724210   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:37.724272   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:37.745953   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.745966   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:37.746030   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:37.766330   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.766343   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:37.766350   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:37.766357   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:37.806838   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:37.806853   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:37.821448   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:37.821462   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:37.881607   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:37.881620   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:37.881627   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:37.898407   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:37.898421   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:40.454224   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:40.466837   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:40.487415   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.487430   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:40.487493   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:40.507968   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.507982   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:40.508073   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:40.528554   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.528568   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:40.528635   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:40.549343   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.549356   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:40.549424   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:40.570809   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.570821   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:40.570883   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:40.591237   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.591250   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:40.591318   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:40.610639   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.610653   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:40.610723   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:40.631545   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.631558   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:40.631565   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:40.631572   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:40.673711   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:40.673731   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:40.688824   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:40.688841   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:40.745116   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:40.745130   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:40.745138   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:40.761782   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:40.761796   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:43.317606   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:43.330310   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:43.350293   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.350307   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:43.350387   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:43.372357   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.372378   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:43.372504   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:43.399415   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.399430   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:43.399500   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:43.425149   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.425189   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:43.425263   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:43.449720   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.449736   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:43.449804   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:43.479250   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.479263   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:43.479330   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:43.501367   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.501381   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:43.501454   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:43.522434   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.522479   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:43.522497   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:43.522507   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:43.561697   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:43.561712   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:43.575954   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:43.575969   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:43.634693   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:43.634706   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:43.634712   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:43.651593   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:43.651607   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:46.208176   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:46.221278   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:46.240804   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.240817   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:46.240885   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:46.260183   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.260196   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:46.260256   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:46.281753   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.281767   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:46.281835   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:46.302467   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.302481   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:46.302549   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:46.323812   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.323827   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:46.323893   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:46.343177   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.343190   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:46.343261   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:46.362811   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.362823   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:46.362887   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:46.384118   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.384134   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:46.384142   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:46.384149   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:46.424601   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:46.424619   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:46.441513   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:46.441529   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:46.506412   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:46.506439   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:46.506467   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:46.523922   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:46.523937   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:49.080178   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:49.093582   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:49.114320   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.114333   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:49.114395   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:49.134415   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.134429   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:49.134495   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:49.155120   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.155134   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:49.155215   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:49.175847   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.175860   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:49.175926   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:49.196129   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.196143   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:49.196233   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:49.216088   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.216103   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:49.216169   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:49.236056   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.236070   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:49.236136   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:49.256781   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.256794   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:49.256801   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:49.256807   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:49.295689   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:49.295704   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:49.310387   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:49.310403   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:49.367074   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:49.367087   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:49.367095   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:49.383726   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:49.383740   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:51.940672   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:51.952964   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:51.973597   82181 logs.go:284] 0 containers: []
	W1025 18:42:51.973610   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:51.973686   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:51.994516   82181 logs.go:284] 0 containers: []
	W1025 18:42:51.994530   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:51.994597   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:52.014565   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.014579   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:52.014643   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:52.034420   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.034432   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:52.034500   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:52.054542   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.054555   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:52.054624   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:52.074755   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.074768   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:52.074828   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:52.095933   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.095946   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:52.096014   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:52.117534   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.117548   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:52.117555   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:52.117561   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:52.134711   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:52.134725   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:52.190799   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:52.190813   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:52.228623   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:52.228637   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:52.242978   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:52.242992   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:52.306286   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:54.806769   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:54.818109   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:54.838918   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.838932   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:54.839001   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:54.858993   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.859006   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:54.859069   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:54.879936   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.879949   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:54.880017   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:54.900081   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.900094   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:54.900160   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:54.921298   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.921312   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:54.921384   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:54.942270   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.942282   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:54.942366   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:54.963192   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.963205   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:54.963276   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:54.983290   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.983304   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:54.983311   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:54.983318   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:55.026485   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:55.026503   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:55.041679   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:55.041693   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:55.099959   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:55.099978   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:55.099984   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:55.116994   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:55.117007   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:57.671737   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:57.683773   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:57.705587   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.705601   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:57.705681   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:57.725950   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.725964   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:57.726031   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:57.746961   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.746975   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:57.747043   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:57.768795   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.768808   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:57.768884   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:57.790459   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.790479   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:57.790573   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:57.810792   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.810805   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:57.810879   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:57.829887   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.829900   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:57.829966   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:57.849589   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.849603   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:57.849609   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:57.849616   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:57.890886   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:57.890901   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:57.905777   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:57.905814   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:57.963211   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:57.963225   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:57.963231   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:57.979595   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:57.979630   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:00.536356   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:00.549498   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:00.568842   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.568856   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:00.568925   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:00.588876   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.588890   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:00.588954   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:00.609387   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.609401   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:00.609467   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:00.629417   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.629431   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:00.629494   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:00.650837   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.650851   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:00.650917   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:00.673085   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.673099   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:00.673166   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:00.695075   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.695090   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:00.695173   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:00.718193   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.718213   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:00.718222   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:00.718232   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:00.736983   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:00.737000   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:00.807357   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:00.807370   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:00.847513   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:00.847532   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:00.862585   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:00.862600   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:00.921682   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:03.423193   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:03.435921   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:03.456303   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.456316   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:03.456380   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:03.477728   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.477742   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:03.477811   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:03.497851   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.497863   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:03.497929   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:03.519646   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.519663   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:03.519735   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:03.539586   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.539598   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:03.539692   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:03.560211   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.560224   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:03.560289   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:03.580592   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.580612   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:03.580689   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:03.601020   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.601034   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:03.601042   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:03.601049   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:03.642911   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:03.642927   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:03.658021   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:03.658037   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:03.722199   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:03.722212   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:03.722221   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:03.739853   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:03.739868   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:06.320193   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:06.332408   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:06.352229   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.352242   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:06.352308   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:06.373276   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.373289   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:06.373356   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:06.393560   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.393573   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:06.393640   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:06.414379   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.414392   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:06.414463   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:06.434224   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.434237   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:06.434305   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:06.454028   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.454042   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:06.454109   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:06.474589   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.474602   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:06.474664   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:06.495905   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.495919   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:06.495926   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:06.495933   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:06.511990   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:06.512011   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:06.571180   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:06.571194   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:06.613482   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:06.613499   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:06.629150   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:06.629166   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:06.690883   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:09.191452   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:09.203480   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:09.225695   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.225714   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:09.225799   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:09.248101   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.248115   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:09.248182   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:09.271175   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.271187   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:09.271252   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:09.292253   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.292267   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:09.292337   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:09.312357   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.312370   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:09.312438   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:09.332238   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.332252   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:09.332319   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:09.352193   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.352206   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:09.352273   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:09.372910   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.372923   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:09.372930   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:09.372937   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:09.428280   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:09.428295   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:09.470624   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:09.470641   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:09.485889   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:09.485904   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:09.543915   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:09.543928   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:09.543935   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:12.062724   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:12.075835   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:12.095197   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.095211   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:12.095277   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:12.114901   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.114918   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:12.114993   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:12.135976   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.135988   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:12.136057   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:12.156893   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.156915   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:12.156979   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:12.176848   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.176860   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:12.176949   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:12.196985   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.196998   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:12.197064   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:12.217855   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.217875   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:12.217958   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:12.238001   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.238014   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:12.238021   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:12.238028   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:12.294478   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:12.294492   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:12.335740   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:12.335759   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:12.352402   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:12.352419   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:12.409494   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:12.409509   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:12.409516   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:14.927163   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:14.938955   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:14.962097   82181 logs.go:284] 0 containers: []
	W1025 18:43:14.962111   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:14.962194   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:14.983387   82181 logs.go:284] 0 containers: []
	W1025 18:43:14.983401   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:14.983466   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:15.004761   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.004775   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:15.004841   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:15.026314   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.026327   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:15.026400   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:15.047872   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.047885   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:15.047960   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:15.068790   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.068803   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:15.068864   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:15.088555   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.088567   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:15.088642   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:15.111274   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.111288   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:15.111296   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:15.111304   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:15.167796   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:15.167809   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:15.167816   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:15.185028   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:15.185058   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:15.239429   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:15.239444   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:15.279477   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:15.279494   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:17.794637   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:17.806084   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:17.825411   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.825425   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:17.825482   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:17.845376   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.845389   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:17.845457   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:17.866310   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.866324   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:17.866394   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:17.887042   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.887062   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:17.887148   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:17.906779   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.906794   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:17.906860   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:17.927562   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.927578   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:17.927655   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:17.949857   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.949873   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:17.949955   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:17.980633   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.980647   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:17.980653   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:17.980662   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:18.022046   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:18.022064   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:18.037351   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:18.037367   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:18.095231   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:18.095244   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:18.095251   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:18.111818   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:18.111833   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:20.669144   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:20.682043   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:20.702095   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.702109   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:20.702191   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:20.722196   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.722210   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:20.722277   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:20.742742   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.742754   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:20.742824   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:20.762863   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.762875   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:20.762937   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:20.783925   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.783938   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:20.784019   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:20.804856   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.807235   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:20.807302   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:20.827474   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.827487   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:20.827552   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:20.847423   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.847436   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:20.847443   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:20.847451   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:20.885916   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:20.885931   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:20.900556   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:20.900573   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:20.963556   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:20.963569   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:20.963577   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:20.989133   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:20.989154   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:23.547645   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:23.560476   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:23.581218   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.581239   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:23.581317   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:23.601770   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.601784   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:23.601854   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:23.622352   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.622365   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:23.622434   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:23.643376   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.643389   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:23.643459   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:23.663397   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.663410   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:23.663475   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:23.683699   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.683713   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:23.683779   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:23.703526   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.703540   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:23.703607   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:23.724241   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.724258   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:23.724267   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:23.724276   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:23.738784   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:23.738798   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:23.798090   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:23.798110   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:23.798117   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:23.814687   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:23.814702   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:23.868474   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:23.868488   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:26.407583   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:26.420954   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:26.441140   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.441154   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:26.441225   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:26.462682   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.462693   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:26.462762   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:26.483770   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.483783   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:26.483846   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:26.504470   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.504482   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:26.504549   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:26.525960   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.525975   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:26.526042   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:26.545768   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.545782   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:26.545859   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:26.566111   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.566124   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:26.566191   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:26.586312   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.586330   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:26.586340   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:26.586350   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:26.600602   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:26.600616   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:26.663520   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:26.663532   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:26.663539   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:26.680011   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:26.680025   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:26.734173   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:26.734187   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:29.275492   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:29.286455   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:29.306872   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.306887   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:29.306953   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:29.328780   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.328795   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:29.328860   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:29.349088   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.349101   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:29.349165   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:29.368877   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.368890   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:29.368960   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:29.390079   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.390093   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:29.390157   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:29.410893   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.410906   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:29.410972   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:29.432053   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.432066   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:29.432132   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:29.453152   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.453166   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:29.453173   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:29.453180   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:29.493866   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:29.493884   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:29.508439   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:29.508471   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:29.567619   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:29.567643   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:29.567650   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:29.584242   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:29.584256   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:32.137838   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:32.150472   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:32.172239   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.172252   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:32.172318   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:32.193987   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.194000   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:32.194076   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:32.216591   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.216603   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:32.216671   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:32.238603   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.238615   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:32.238683   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:32.285054   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.285066   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:32.285134   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:32.305257   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.305271   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:32.305334   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:32.325346   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.325359   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:32.325425   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:32.345116   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.345131   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:32.345138   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:32.345145   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:32.406368   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:32.406381   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:32.406389   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:32.423858   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:32.423871   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:32.479004   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:32.479019   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:32.518893   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:32.518909   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:35.034745   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:35.047319   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:35.067239   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.067252   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:35.067322   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:35.087659   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.087674   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:35.087741   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:35.109742   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.109754   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:35.109815   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:35.130630   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.130643   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:35.130709   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:35.151290   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.151303   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:35.151371   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:35.172089   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.172104   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:35.172173   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:35.194715   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.194727   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:35.194791   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:35.218163   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.218177   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:35.218184   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:35.218191   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:35.261160   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:35.261182   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:35.286461   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:35.286476   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:35.345203   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:35.345215   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:35.345222   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:35.361709   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:35.361723   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:37.917261   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:37.930232   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:37.949274   82181 logs.go:284] 0 containers: []
	W1025 18:43:37.949288   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:37.949351   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:37.970392   82181 logs.go:284] 0 containers: []
	W1025 18:43:37.970406   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:37.970475   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:37.990131   82181 logs.go:284] 0 containers: []
	W1025 18:43:37.990144   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:37.990210   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:38.011356   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.011370   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:38.011436   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:38.031847   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.031860   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:38.031924   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:38.052287   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.052300   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:38.052362   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:38.072524   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.072537   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:38.072601   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:38.092638   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.092665   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:38.092678   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:38.092688   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:38.106922   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:38.106934   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:38.166697   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:38.166714   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:38.166723   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:38.185304   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:38.185320   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:38.246295   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:38.246312   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:40.808324   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:40.823135   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:40.842626   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.842640   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:40.842709   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:40.863082   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.863095   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:40.863164   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:40.885340   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.885354   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:40.885421   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:40.905392   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.905405   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:40.905469   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:40.926212   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.926226   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:40.926294   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:40.947446   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.947464   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:40.947539   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:40.968767   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.968781   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:40.968846   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:40.990296   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.990309   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:40.990317   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:40.990323   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:41.029147   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:41.029161   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:41.043774   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:41.043791   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:41.102542   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:41.102556   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:41.102562   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:41.119375   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:41.119390   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:43.677177   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:43.689940   82181 kubeadm.go:640] restartCluster took 4m13.375993433s
	W1025 18:43:43.689981   82181 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I1025 18:43:43.689995   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 18:43:44.109601   82181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:43:44.121849   82181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:43:44.131755   82181 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:43:44.131814   82181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:43:44.141941   82181 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:43:44.141970   82181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:43:44.196200   82181 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1025 18:43:44.196259   82181 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:43:44.456251   82181 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:43:44.456339   82181 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:43:44.456431   82181 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:43:44.647870   82181 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:43:44.648757   82181 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:43:44.655612   82181 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1025 18:43:44.734043   82181 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:43:44.755577   82181 out.go:204]   - Generating certificates and keys ...
	I1025 18:43:44.755644   82181 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:43:44.755713   82181 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:43:44.755813   82181 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:43:44.755894   82181 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:43:44.755955   82181 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:43:44.756015   82181 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 18:43:44.756106   82181 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:43:44.756168   82181 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:43:44.756265   82181 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:43:44.756346   82181 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:43:44.756377   82181 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 18:43:44.756430   82181 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:43:44.998891   82181 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:43:45.114523   82181 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:43:45.163460   82181 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:43:45.340794   82181 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:43:45.341910   82181 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:43:45.363624   82181 out.go:204]   - Booting up control plane ...
	I1025 18:43:45.363707   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:43:45.363778   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:43:45.363841   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:43:45.363908   82181 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:43:45.364031   82181 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:44:25.353015   82181 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 18:44:25.354725   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:44:25.354942   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:44:30.357344   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:44:30.357585   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:44:40.358701   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:44:40.358926   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:45:00.360767   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:45:00.360987   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:45:40.364510   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:45:40.364750   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:45:40.364765   82181 kubeadm.go:322] 
	I1025 18:45:40.364810   82181 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 18:45:40.364852   82181 kubeadm.go:322] 	timed out waiting for the condition
	I1025 18:45:40.364859   82181 kubeadm.go:322] 
	I1025 18:45:40.364899   82181 kubeadm.go:322] This error is likely caused by:
	I1025 18:45:40.364936   82181 kubeadm.go:322] 	- The kubelet is not running
	I1025 18:45:40.365074   82181 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 18:45:40.365118   82181 kubeadm.go:322] 
	I1025 18:45:40.365225   82181 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 18:45:40.365277   82181 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 18:45:40.365344   82181 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 18:45:40.365362   82181 kubeadm.go:322] 
	I1025 18:45:40.365516   82181 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 18:45:40.365644   82181 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 18:45:40.365733   82181 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 18:45:40.365772   82181 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 18:45:40.365864   82181 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 18:45:40.365902   82181 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 18:45:40.367617   82181 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 18:45:40.367688   82181 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 18:45:40.367801   82181 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1025 18:45:40.367890   82181 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:45:40.367964   82181 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 18:45:40.368017   82181 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W1025 18:45:40.368092   82181 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 18:45:40.368120   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 18:45:40.786946   82181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:45:40.798986   82181 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:45:40.811591   82181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:45:40.822810   82181 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:45:40.822832   82181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:45:40.877240   82181 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1025 18:45:40.877291   82181 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:45:41.141366   82181 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:45:41.141442   82181 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:45:41.141573   82181 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:45:41.336472   82181 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:45:41.337251   82181 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:45:41.341935   82181 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1025 18:45:41.417417   82181 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:45:41.438822   82181 out.go:204]   - Generating certificates and keys ...
	I1025 18:45:41.438907   82181 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:45:41.438972   82181 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:45:41.439029   82181 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:45:41.439086   82181 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:45:41.439184   82181 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:45:41.439222   82181 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 18:45:41.439270   82181 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:45:41.439381   82181 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:45:41.439500   82181 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:45:41.439584   82181 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:45:41.439624   82181 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 18:45:41.439689   82181 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:45:41.533456   82181 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:45:41.752689   82181 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:45:41.896828   82181 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:45:42.085066   82181 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:45:42.085647   82181 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:45:42.107183   82181 out.go:204]   - Booting up control plane ...
	I1025 18:45:42.107381   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:45:42.107505   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:45:42.107639   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:45:42.107791   82181 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:45:42.108009   82181 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:46:22.096014   82181 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 18:46:22.096376   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:46:22.096689   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:46:27.098194   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:46:27.098441   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:46:37.099130   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:46:37.099348   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:46:57.101676   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:46:57.101907   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:47:37.103667   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:47:37.103918   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:47:37.103933   82181 kubeadm.go:322] 
	I1025 18:47:37.103975   82181 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 18:47:37.104049   82181 kubeadm.go:322] 	timed out waiting for the condition
	I1025 18:47:37.104069   82181 kubeadm.go:322] 
	I1025 18:47:37.104118   82181 kubeadm.go:322] This error is likely caused by:
	I1025 18:47:37.104168   82181 kubeadm.go:322] 	- The kubelet is not running
	I1025 18:47:37.104285   82181 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 18:47:37.104294   82181 kubeadm.go:322] 
	I1025 18:47:37.104419   82181 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 18:47:37.104469   82181 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 18:47:37.104497   82181 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 18:47:37.104503   82181 kubeadm.go:322] 
	I1025 18:47:37.104575   82181 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 18:47:37.104647   82181 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 18:47:37.104731   82181 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 18:47:37.104837   82181 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 18:47:37.104921   82181 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 18:47:37.104947   82181 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 18:47:37.107205   82181 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 18:47:37.107291   82181 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 18:47:37.107439   82181 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1025 18:47:37.107546   82181 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:47:37.107636   82181 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 18:47:37.107725   82181 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1025 18:47:37.107741   82181 kubeadm.go:406] StartCluster complete in 8m6.825271613s
	I1025 18:47:37.107874   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:37.141468   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.141533   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:47:37.141666   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:37.172115   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.172129   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:47:37.172199   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:37.194834   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.194847   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:47:37.194905   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:37.218832   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.218848   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:47:37.218917   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:37.256351   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.256366   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:47:37.256427   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:37.284502   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.284514   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:47:37.284567   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:37.308850   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.308866   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:47:37.308935   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:47:37.341948   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.341967   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:47:37.341977   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:37.341987   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:37.400804   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:37.400825   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:37.419687   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:37.419708   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:47:37.497072   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:47:37.497086   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:37.497106   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:37.514415   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:47:37.514429   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 18:47:37.585147   82181 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 18:47:37.585175   82181 out.go:239] * 
	* 
	W1025 18:47:37.585219   82181 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 18:47:37.585237   82181 out.go:239] * 
	W1025 18:47:37.585894   82181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:47:37.652007   82181 out.go:177] 
	W1025 18:47:37.694124   82181 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 18:47:37.694171   82181 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 18:47:37.694185   82181 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 18:47:37.735911   82181 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-479000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
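The warnings and the suggestion in the captured output point at a cgroup-driver mismatch (Docker reports "cgroupfs", kubeadm recommends "systemd") as one likely reason the kubelet never answers on localhost:10248 and kubeadm times out. A minimal way to confirm and retry by hand, assuming the profile name and flags from this run; the commands are adapted from the suggestions printed in the log and the `minikube ssh` wrapping is an assumption about reaching the node from the host, none of this is executed by the test harness:
	# Check the kubelet inside the minikube node (as the kubeadm output suggests)
	out/minikube-darwin-amd64 -p old-k8s-version-479000 ssh -- sudo systemctl status kubelet
	out/minikube-darwin-amd64 -p old-k8s-version-479000 ssh -- sudo journalctl -xeu kubelet
	# Look for crashed control-plane containers inside the node
	out/minikube-darwin-amd64 -p old-k8s-version-479000 ssh -- "docker ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override suggested in the log
	out/minikube-darwin-amd64 start -p old-k8s-version-479000 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd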
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-479000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-479000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69",
	        "Created": "2023-10-26T01:32:58.324650138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 334177,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:39:11.94787661Z",
	            "FinishedAt": "2023-10-26T01:39:09.148658914Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69-json.log",
	        "Name": "/old-k8s-version-479000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-479000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-479000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-479000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-479000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-479000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc3db0f18f0faa6596591e1d572ee41d081e2b2af745d61195c907cba1db1022",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59994"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59995"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59996"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59992"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59993"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dc3db0f18f0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-479000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5e3f3c28e57c",
	                        "old-k8s-version-479000"
	                    ],
	                    "NetworkID": "e1c286b1eee5e63f7c876927f11c7e5f513aa124ea1227ec48978fbb98cbe026",
	                    "EndpointID": "a062e5ce1f7c9ea5b00721beec8298e5232dea7572107ad45a21b2733d6f4e61",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
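The inspect dump shows the kic container itself is healthy (State "running", RestartCount 0, ports 22/2376/32443/5000/8443 published on 127.0.0.1, IP 192.168.67.2 on the old-k8s-version-479000 network); only the Kubernetes control plane inside it failed to come up. The same fields can be pulled without reading the full JSON by using docker's standard --format Go templates; illustrative commands only, not run by the harness, with field names taken from the dump above:
	# Container state and restart count
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-479000
	# Published ports (host side is 127.0.0.1 with dynamically assigned ports)
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-479000
	# IP on the profile's Docker network
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-479000").IPAddress}}' old-k8s-version-479000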
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 2 (501.60546ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
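With the template narrowed to {{.Host}} the command prints only "Running", while the non-zero exit reflects the components that are not up (the harness itself notes this "may be ok"). A fuller picture comes from the untemplated status or from naming the other fields; sketch only, and the Kubelet/APIServer template fields below are assumptions based on minikube's default status output rather than something shown in this log:
	out/minikube-darwin-amd64 status -p old-k8s-version-479000
	# Kubelet/APIServer field names assumed from the default status fields
	out/minikube-darwin-amd64 status -p old-k8s-version-479000 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'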
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-479000 logs -n 25
E1025 18:47:39.489154   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-479000 logs -n 25: (1.988068172s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-143000 sudo crio                            | kubenet-143000         | jenkins | v1.31.2 | 25 Oct 23 18:33 PDT | 25 Oct 23 18:33 PDT |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-143000                                      | kubenet-143000         | jenkins | v1.31.2 | 25 Oct 23 18:33 PDT | 25 Oct 23 18:33 PDT |
	| start   | -p no-preload-622000                                   | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:33 PDT | 25 Oct 23 18:34 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-622000             | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:35 PDT | 25 Oct 23 18:35 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-622000                                   | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:35 PDT | 25 Oct 23 18:35 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-622000                  | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:35 PDT | 25 Oct 23 18:35 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-622000                                   | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:35 PDT | 25 Oct 23 18:40 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-479000        | old-k8s-version-479000 | jenkins | v1.31.2 | 25 Oct 23 18:37 PDT |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-479000                              | old-k8s-version-479000 | jenkins | v1.31.2 | 25 Oct 23 18:39 PDT | 25 Oct 23 18:39 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-479000             | old-k8s-version-479000 | jenkins | v1.31.2 | 25 Oct 23 18:39 PDT | 25 Oct 23 18:39 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-479000                              | old-k8s-version-479000 | jenkins | v1.31.2 | 25 Oct 23 18:39 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p no-preload-622000 sudo                              | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:40 PDT | 25 Oct 23 18:40 PDT |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p no-preload-622000                                   | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:40 PDT | 25 Oct 23 18:40 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-622000                                   | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:40 PDT | 25 Oct 23 18:40 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-622000                                   | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:40 PDT | 25 Oct 23 18:41 PDT |
	| delete  | -p no-preload-622000                                   | no-preload-622000      | jenkins | v1.31.2 | 25 Oct 23 18:41 PDT | 25 Oct 23 18:41 PDT |
	| start   | -p embed-certs-488000                                  | embed-certs-488000     | jenkins | v1.31.2 | 25 Oct 23 18:41 PDT | 25 Oct 23 18:41 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488000            | embed-certs-488000     | jenkins | v1.31.2 | 25 Oct 23 18:41 PDT | 25 Oct 23 18:41 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-488000                                  | embed-certs-488000     | jenkins | v1.31.2 | 25 Oct 23 18:41 PDT | 25 Oct 23 18:42 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-488000                 | embed-certs-488000     | jenkins | v1.31.2 | 25 Oct 23 18:42 PDT | 25 Oct 23 18:42 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-488000                                  | embed-certs-488000     | jenkins | v1.31.2 | 25 Oct 23 18:42 PDT | 25 Oct 23 18:47 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                        |         |         |                     |                     |
	| ssh     | -p embed-certs-488000 sudo                             | embed-certs-488000     | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p embed-certs-488000                                  | embed-certs-488000     | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p embed-certs-488000                                  | embed-certs-488000     | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p embed-certs-488000                                  | embed-certs-488000     | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 18:42:02
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 18:42:02.301173   82708 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:42:02.301455   82708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:42:02.301461   82708 out.go:309] Setting ErrFile to fd 2...
	I1025 18:42:02.301465   82708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:42:02.301646   82708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:42:02.303037   82708 out.go:303] Setting JSON to false
	I1025 18:42:02.325198   82708 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":34890,"bootTime":1698249632,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 18:42:02.325304   82708 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:42:02.347555   82708 out.go:177] * [embed-certs-488000] minikube v1.31.2 on Darwin 14.0
	I1025 18:42:02.391258   82708 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 18:42:02.391329   82708 notify.go:220] Checking for updates...
	I1025 18:42:02.434980   82708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:42:02.456275   82708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:42:02.478175   82708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:42:02.499100   82708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 18:42:02.541016   82708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:42:02.563005   82708 config.go:182] Loaded profile config "embed-certs-488000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:42:02.563746   82708 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:42:02.621582   82708 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 18:42:02.621719   82708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:42:02.725778   82708 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:42:02.714477536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:42:02.769253   82708 out.go:177] * Using the docker driver based on existing profile
	I1025 18:42:02.790247   82708 start.go:298] selected driver: docker
	I1025 18:42:02.790293   82708 start.go:902] validating driver "docker" against &{Name:embed-certs-488000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-488000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:42:02.790408   82708 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:42:02.794810   82708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:42:02.895262   82708 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:42:02.884277694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:42:02.895510   82708 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:42:02.895565   82708 cni.go:84] Creating CNI manager for ""
	I1025 18:42:02.895579   82708 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:42:02.895591   82708 start_flags.go:323] config:
	{Name:embed-certs-488000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-488000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:42:02.937902   82708 out.go:177] * Starting control plane node embed-certs-488000 in cluster embed-certs-488000
	I1025 18:42:02.958819   82708 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:42:03.003797   82708 out.go:177] * Pulling base image ...
	I1025 18:42:03.024844   82708 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:42:03.024872   82708 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:42:03.024907   82708 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 18:42:03.024922   82708 cache.go:56] Caching tarball of preloaded images
	I1025 18:42:03.025028   82708 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:42:03.025037   82708 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 18:42:03.025494   82708 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000/config.json ...
	I1025 18:42:03.081090   82708 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:42:03.081109   82708 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:42:03.081136   82708 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:42:03.081185   82708 start.go:365] acquiring machines lock for embed-certs-488000: {Name:mkecb63fbb86e7e885003e8831650f1c38b00aba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:42:03.081278   82708 start.go:369] acquired machines lock for "embed-certs-488000" in 73.124µs
	I1025 18:42:03.081301   82708 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:42:03.081308   82708 fix.go:54] fixHost starting: 
	I1025 18:42:03.081579   82708 cli_runner.go:164] Run: docker container inspect embed-certs-488000 --format={{.State.Status}}
	I1025 18:42:03.138428   82708 fix.go:102] recreateIfNeeded on embed-certs-488000: state=Stopped err=<nil>
	W1025 18:42:03.138469   82708 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 18:42:03.160177   82708 out.go:177] * Restarting existing docker container for "embed-certs-488000" ...
	I1025 18:42:02.981674   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:02.993230   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:03.013247   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.013259   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:03.013322   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:03.033221   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.033234   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:03.033291   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:03.055295   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.055310   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:03.055376   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:03.078169   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.078182   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:03.078271   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:03.099922   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.099935   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:03.100006   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:03.123324   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.123338   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:03.123396   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:03.145660   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.145671   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:03.145736   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:03.167377   82181 logs.go:284] 0 containers: []
	W1025 18:42:03.167390   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:03.167398   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:03.167409   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:03.207852   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:03.207873   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:03.223354   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:03.223369   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:03.289769   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:03.289782   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:03.289790   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:03.306732   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:03.306755   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:03.180831   82708 cli_runner.go:164] Run: docker start embed-certs-488000
	I1025 18:42:03.465233   82708 cli_runner.go:164] Run: docker container inspect embed-certs-488000 --format={{.State.Status}}
	I1025 18:42:03.524613   82708 kic.go:427] container "embed-certs-488000" state is running.
	I1025 18:42:03.525169   82708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-488000
	I1025 18:42:03.584156   82708 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000/config.json ...
	I1025 18:42:03.584650   82708 machine.go:88] provisioning docker machine ...
	I1025 18:42:03.584702   82708 ubuntu.go:169] provisioning hostname "embed-certs-488000"
	I1025 18:42:03.584797   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:03.642920   82708 main.go:141] libmachine: Using SSH client type: native
	I1025 18:42:03.643318   82708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60120 <nil> <nil>}
	I1025 18:42:03.643336   82708 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-488000 && echo "embed-certs-488000" | sudo tee /etc/hostname
	I1025 18:42:03.645059   82708 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 18:42:06.781640   82708 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-488000
	
	I1025 18:42:06.781761   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:06.833773   82708 main.go:141] libmachine: Using SSH client type: native
	I1025 18:42:06.834069   82708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60120 <nil> <nil>}
	I1025 18:42:06.834082   82708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-488000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-488000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-488000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:42:06.959278   82708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:42:06.959300   82708 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:42:06.959321   82708 ubuntu.go:177] setting up certificates
	I1025 18:42:06.959328   82708 provision.go:83] configureAuth start
	I1025 18:42:06.959397   82708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-488000
	I1025 18:42:07.011083   82708 provision.go:138] copyHostCerts
	I1025 18:42:07.011171   82708 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:42:07.011182   82708 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:42:07.011304   82708 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:42:07.011538   82708 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:42:07.011544   82708 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:42:07.011607   82708 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:42:07.011763   82708 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:42:07.011769   82708 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:42:07.011827   82708 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:42:07.011973   82708 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.embed-certs-488000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-488000]
	I1025 18:42:07.116919   82708 provision.go:172] copyRemoteCerts
	I1025 18:42:07.116982   82708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:42:07.117041   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:07.168955   82708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60120 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/embed-certs-488000/id_rsa Username:docker}
	I1025 18:42:07.259398   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:42:07.282068   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 18:42:07.305349   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 18:42:07.346989   82708 provision.go:86] duration metric: configureAuth took 387.622299ms
	I1025 18:42:07.347004   82708 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:42:07.347140   82708 config.go:182] Loaded profile config "embed-certs-488000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:42:07.347210   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:07.398650   82708 main.go:141] libmachine: Using SSH client type: native
	I1025 18:42:07.398944   82708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60120 <nil> <nil>}
	I1025 18:42:07.398955   82708 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:42:07.522127   82708 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:42:07.522146   82708 ubuntu.go:71] root file system type: overlay
	I1025 18:42:07.522225   82708 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:42:07.522320   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:07.575816   82708 main.go:141] libmachine: Using SSH client type: native
	I1025 18:42:07.576112   82708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60120 <nil> <nil>}
	I1025 18:42:07.576163   82708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:42:07.712287   82708 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:42:07.712396   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:07.764259   82708 main.go:141] libmachine: Using SSH client type: native
	I1025 18:42:07.764544   82708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60120 <nil> <nil>}
	I1025 18:42:07.764557   82708 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:42:07.893635   82708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:42:07.893657   82708 machine.go:91] provisioned docker machine in 4.308743761s
	I1025 18:42:07.893666   82708 start.go:300] post-start starting for "embed-certs-488000" (driver="docker")
	I1025 18:42:07.893681   82708 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:42:07.893749   82708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:42:07.893805   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:07.944958   82708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60120 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/embed-certs-488000/id_rsa Username:docker}
	I1025 18:42:08.036608   82708 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:42:08.041153   82708 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:42:08.041181   82708 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:42:08.041189   82708 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:42:08.041194   82708 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:42:08.041204   82708 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:42:08.041299   82708 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:42:08.041438   82708 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:42:08.041592   82708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:42:08.050788   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:42:08.073569   82708 start.go:303] post-start completed in 179.884036ms
	I1025 18:42:08.073645   82708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:42:08.073737   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:08.125597   82708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60120 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/embed-certs-488000/id_rsa Username:docker}
	I1025 18:42:08.212892   82708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:42:08.218869   82708 fix.go:56] fixHost completed within 5.137280185s
	I1025 18:42:08.218889   82708 start.go:83] releasing machines lock for "embed-certs-488000", held for 5.13732603s
	I1025 18:42:08.219004   82708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-488000
	I1025 18:42:08.272528   82708 ssh_runner.go:195] Run: cat /version.json
	I1025 18:42:08.272551   82708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:42:08.272604   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:08.272620   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:08.330785   82708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60120 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/embed-certs-488000/id_rsa Username:docker}
	I1025 18:42:08.330787   82708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60120 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/embed-certs-488000/id_rsa Username:docker}
	I1025 18:42:08.528021   82708 ssh_runner.go:195] Run: systemctl --version
	I1025 18:42:08.533524   82708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 18:42:08.539275   82708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 18:42:08.558316   82708 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 18:42:08.558393   82708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 18:42:08.568065   82708 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 18:42:08.568077   82708 start.go:472] detecting cgroup driver to use...
	I1025 18:42:08.568094   82708 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:42:08.568197   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:42:08.584780   82708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 18:42:08.595437   82708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:42:08.605874   82708 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:42:08.605934   82708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:42:08.616597   82708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:42:08.627561   82708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:42:08.638485   82708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:42:08.649397   82708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:42:08.659681   82708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:42:08.670201   82708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:42:08.680042   82708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:42:08.689890   82708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:42:08.750233   82708 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:42:08.834994   82708 start.go:472] detecting cgroup driver to use...
	I1025 18:42:08.835017   82708 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:42:08.835105   82708 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:42:08.849746   82708 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:42:08.849833   82708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:42:08.864792   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:42:08.888278   82708 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:42:08.894003   82708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:42:08.920365   82708 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:42:08.955379   82708 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:42:09.067949   82708 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:42:09.164780   82708 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:42:09.164924   82708 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:42:09.214254   82708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:42:09.277225   82708 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:42:09.590874   82708 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:42:09.654450   82708 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:42:09.721099   82708 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:42:09.796473   82708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:42:09.852102   82708 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:42:09.877065   82708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:42:09.935907   82708 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 18:42:10.026978   82708 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:42:10.027067   82708 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:42:10.032346   82708 start.go:540] Will wait 60s for crictl version
	I1025 18:42:10.032407   82708 ssh_runner.go:195] Run: which crictl
	I1025 18:42:10.037078   82708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:42:10.083426   82708 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 18:42:10.083528   82708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:42:10.110690   82708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:42:05.870602   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:05.883759   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:05.903227   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.903243   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:05.903317   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:05.923740   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.923753   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:05.923829   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:05.945134   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.945150   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:05.945223   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:05.965941   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.965954   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:05.966027   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:05.993716   82181 logs.go:284] 0 containers: []
	W1025 18:42:05.993729   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:05.993799   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:06.014287   82181 logs.go:284] 0 containers: []
	W1025 18:42:06.014339   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:06.014459   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:06.035988   82181 logs.go:284] 0 containers: []
	W1025 18:42:06.036002   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:06.036069   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:06.056905   82181 logs.go:284] 0 containers: []
	W1025 18:42:06.056919   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:06.056926   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:06.056942   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:06.094581   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:06.094594   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:06.109207   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:06.109220   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:06.166955   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:06.166967   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:06.166974   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:06.183129   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:06.183144   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:08.738136   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:08.750258   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:08.772037   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.772050   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:08.772116   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:08.793588   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.793602   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:08.793685   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:08.817186   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.817200   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:08.817262   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:08.839823   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.839836   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:08.839899   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:08.862903   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.862919   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:08.862987   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:08.887382   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.887399   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:08.887480   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:08.912424   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.912444   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:08.912545   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:08.940104   82181 logs.go:284] 0 containers: []
	W1025 18:42:08.940125   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:08.940136   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:08.940147   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:09.018119   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:09.018136   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:09.069630   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:09.069643   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:09.085265   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:09.085282   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:09.154094   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:09.154109   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:09.154116   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:10.160894   82708 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 18:42:10.161018   82708 cli_runner.go:164] Run: docker exec -t embed-certs-488000 dig +short host.docker.internal
	I1025 18:42:10.279342   82708 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:42:10.279447   82708 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:42:10.284432   82708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:42:10.296353   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:10.348738   82708 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:42:10.348812   82708 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:42:10.369995   82708 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1025 18:42:10.370024   82708 docker.go:623] Images already preloaded, skipping extraction
	I1025 18:42:10.370113   82708 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:42:10.391503   82708 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1025 18:42:10.391527   82708 cache_images.go:84] Images are preloaded, skipping loading
	I1025 18:42:10.391597   82708 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:42:10.448105   82708 cni.go:84] Creating CNI manager for ""
	I1025 18:42:10.448124   82708 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:42:10.448151   82708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 18:42:10.448169   82708 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-488000 NodeName:embed-certs-488000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:42:10.448283   82708 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-488000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:42:10.448345   82708 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-488000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-488000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:42:10.448409   82708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 18:42:10.460839   82708 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:42:10.460987   82708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:42:10.472896   82708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1025 18:42:10.491060   82708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:42:10.510760   82708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I1025 18:42:10.529860   82708 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:42:10.534660   82708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:42:10.547496   82708 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000 for IP: 192.168.76.2
	I1025 18:42:10.547539   82708 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:42:10.547737   82708 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:42:10.547819   82708 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:42:10.547936   82708 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000/client.key
	I1025 18:42:10.548041   82708 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000/apiserver.key.31bdca25
	I1025 18:42:10.548093   82708 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000/proxy-client.key
	I1025 18:42:10.548424   82708 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:42:10.548458   82708 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:42:10.548474   82708 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:42:10.548506   82708 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:42:10.548556   82708 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:42:10.548597   82708 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:42:10.548694   82708 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:42:10.549336   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 18:42:10.573267   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 18:42:10.596278   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:42:10.619533   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/embed-certs-488000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 18:42:10.643664   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:42:10.667092   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:42:10.690907   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:42:10.714852   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:42:10.737855   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:42:10.761463   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:42:10.785173   82708 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:42:10.808475   82708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:42:10.825733   82708 ssh_runner.go:195] Run: openssl version
	I1025 18:42:10.831811   82708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:42:10.842271   82708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:42:10.846747   82708 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:42:10.846811   82708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:42:10.853835   82708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:42:10.863319   82708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:42:10.873411   82708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:42:10.878019   82708 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:42:10.878066   82708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:42:10.885121   82708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:42:10.894647   82708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:42:10.904743   82708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:42:10.909404   82708 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:42:10.909452   82708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:42:10.916516   82708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
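	(Side note, not part of the captured output.) The symlink names in the three ln -fs commands above follow OpenSSL's subject-hash convention for /etc/ssl/certs: each PEM is linked under the 8-hex-digit hash that openssl x509 -hash -noout prints, with a .0 suffix to disambiguate collisions. For the first cert in this run the mapping can be reproduced by hand:
	  sudo openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem   # the run above computed 3ec20f2e
	  ls -l /etc/ssl/certs/3ec20f2e.0                                            # -> /etc/ssl/certs/652922.pem, per the ln -fs above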
	I1025 18:42:10.926385   82708 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:42:10.930859   82708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 18:42:10.937921   82708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 18:42:10.945105   82708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 18:42:10.952395   82708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 18:42:10.959397   82708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 18:42:10.966435   82708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
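	(Hedged reading of the six openssl invocations above, not part of the captured output.) These are the pre-flight expiry checks on the existing control-plane certificates: -checkend 86400 makes openssl exit non-zero when a certificate expires within the next 24 hours, presumably so an expiring cert can be regenerated before the restart is attempted. The same probe can be run by hand inside the node:
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expires within 24h"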
	I1025 18:42:10.973187   82708 kubeadm.go:404] StartCluster: {Name:embed-certs-488000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-488000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:42:10.973301   82708 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:42:10.993036   82708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:42:11.002662   82708 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 18:42:11.002685   82708 kubeadm.go:636] restartCluster start
	I1025 18:42:11.002738   82708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 18:42:11.011874   82708 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:11.011946   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:42:11.064217   82708 kubeconfig.go:135] verify returned: extract IP: "embed-certs-488000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:42:11.064375   82708 kubeconfig.go:146] "embed-certs-488000" context is missing from /Users/jenkins/minikube-integration/17488-64832/kubeconfig - will repair!
	I1025 18:42:11.064673   82708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:42:11.066244   82708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 18:42:11.076360   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:11.076433   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:11.087345   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:11.087376   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:11.087491   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:11.098536   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:11.600171   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:11.600414   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:11.613388   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:12.099424   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:12.099541   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:12.111816   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:11.676485   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:11.689450   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:11.709336   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.709349   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:11.709419   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:11.728975   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.728987   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:11.729055   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:11.749619   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.749631   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:11.749700   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:11.770661   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.770675   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:11.770742   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:11.791986   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.792000   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:11.792068   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:11.812462   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.812474   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:11.812540   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:11.832352   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.832365   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:11.832431   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:11.853457   82181 logs.go:284] 0 containers: []
	W1025 18:42:11.853470   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:11.853477   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:11.853484   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:11.913491   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:11.913508   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:11.913515   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:11.931802   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:11.931817   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:12.001077   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:12.001094   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:12.043548   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:12.043566   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:14.559675   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:14.573031   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:14.592894   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.592908   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:14.592985   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:14.613668   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.613680   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:14.613744   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:14.634370   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.634382   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:14.634449   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:14.654123   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.654137   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:14.654212   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:14.674400   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.674413   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:14.674488   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:14.694240   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.694254   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:14.694318   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:14.714702   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.714715   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:14.714788   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:14.735889   82181 logs.go:284] 0 containers: []
	W1025 18:42:14.735902   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:14.735910   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:14.735917   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:14.793730   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:14.793742   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:14.793749   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:14.809751   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:14.809765   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:14.862782   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:14.862797   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:14.899756   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:14.899770   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:12.600742   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:12.600909   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:12.614049   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:13.100812   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:13.101048   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:13.114279   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:13.599411   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:13.599552   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:13.611340   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:14.099324   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:14.099437   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:14.112493   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:14.598793   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:14.598871   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:14.610377   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:15.100270   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:15.100471   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:15.113512   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:15.599950   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:15.600104   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:15.613002   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:16.098851   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:16.098946   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:16.111113   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:16.600432   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:16.600576   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:16.613512   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:17.099292   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:17.099442   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:17.112605   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:17.415881   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:17.428615   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:17.448376   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.448389   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:17.448453   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:17.469473   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.469486   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:17.469548   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:17.490090   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.490109   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:17.490188   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:17.510413   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.510425   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:17.510493   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:17.530323   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.530335   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:17.530400   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:17.549925   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.549938   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:17.550006   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:17.569538   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.569559   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:17.569624   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:17.590102   82181 logs.go:284] 0 containers: []
	W1025 18:42:17.590117   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:17.590128   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:17.590139   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:17.629668   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:17.629681   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:17.644389   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:17.644403   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:17.701382   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:17.701401   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:17.701409   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:17.717911   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:17.717926   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:20.273790   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:20.287156   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:20.306764   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.306777   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:20.306846   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:20.327638   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.327653   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:20.327722   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:20.347636   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.347650   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:20.347715   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:20.368304   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.368315   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:20.368373   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:20.388626   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.388638   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:20.388715   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:20.409999   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.410012   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:20.410086   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:20.429818   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.429830   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:20.429910   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:20.450985   82181 logs.go:284] 0 containers: []
	W1025 18:42:20.450997   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:20.451003   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:20.451010   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:20.467569   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:20.467587   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:20.521550   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:20.521565   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:20.561885   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:20.561908   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:20.577051   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:20.577068   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:20.638042   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:17.599414   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:17.599490   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:17.612753   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:18.099345   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:18.099491   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:18.112364   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:18.598981   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:18.599090   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:18.610942   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:19.099447   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:19.099637   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:19.112462   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:19.599122   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:19.599346   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:19.612350   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:20.101076   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:20.101292   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:20.114742   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:20.600084   82708 api_server.go:166] Checking apiserver status ...
	I1025 18:42:20.600168   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:42:20.613435   82708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:21.077882   82708 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1025 18:42:21.077974   82708 kubeadm.go:1128] stopping kube-system containers ...
	I1025 18:42:21.078090   82708 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:42:21.102514   82708 docker.go:464] Stopping containers: [734dfc74f6b7 726e9244e1ac 9684ac7563da 189c1f395fe5 704003c834c7 dd28dfc6f272 8c7fc1ff0923 bff24befe277 c53c3ce48dc0 ac98bb987ab7 a21a90a308e6 82625e87a571 8108cdc677ba 41e7cae8b5e1 7b3a6bab045a]
	I1025 18:42:21.102585   82708 ssh_runner.go:195] Run: docker stop 734dfc74f6b7 726e9244e1ac 9684ac7563da 189c1f395fe5 704003c834c7 dd28dfc6f272 8c7fc1ff0923 bff24befe277 c53c3ce48dc0 ac98bb987ab7 a21a90a308e6 82625e87a571 8108cdc677ba 41e7cae8b5e1 7b3a6bab045a
	I1025 18:42:21.123884   82708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 18:42:21.136575   82708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:42:21.146081   82708 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct 26 01:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 26 01:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Oct 26 01:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 26 01:41 /etc/kubernetes/scheduler.conf
	
	I1025 18:42:21.146146   82708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 18:42:21.155885   82708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 18:42:21.165476   82708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 18:42:21.174719   82708 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:21.174776   82708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:42:21.184015   82708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 18:42:21.193239   82708 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:42:21.193309   82708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 18:42:21.202499   82708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:42:21.212047   82708 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 18:42:21.212060   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:42:21.263745   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:42:21.797096   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:42:21.935139   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:42:21.990638   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
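	(Condensed sketch of the restart path above, not part of the captured output.) Instead of a full kubeadm init, the reconfigure re-runs the individual init phases against the freshly copied /var/tmp/minikube/kubeadm.yaml. The sequence, taken from the five commands in the log, with paths as they appear on the node:
	  export PATH=/var/lib/minikube/binaries/v1.28.3:$PATH
	  sudo env PATH="$PATH" kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$PATH" kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$PATH" kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="$PATH" kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml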
	I1025 18:42:22.055373   82708 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:42:22.055449   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:22.068796   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:23.138443   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:23.155511   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:23.176069   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.176083   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:23.176162   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:23.197051   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.197064   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:23.197134   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:23.221643   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.221657   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:23.221724   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:23.251876   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.251894   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:23.251993   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:23.289923   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.289936   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:23.290002   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:23.309441   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.309454   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:23.309522   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:23.336977   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.337029   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:23.337176   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:23.360785   82181 logs.go:284] 0 containers: []
	W1025 18:42:23.360799   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:23.360806   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:23.360812   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:23.408253   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:23.408269   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:23.433543   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:23.433568   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:23.494375   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:23.494393   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:23.494401   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:23.512512   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:23.512535   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:22.625717   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:23.125485   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:23.626841   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:23.640967   82708 api_server.go:72] duration metric: took 1.585534048s to wait for apiserver process to appear ...
	I1025 18:42:23.640980   82708 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:42:23.640995   82708 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60124/healthz ...
	I1025 18:42:25.942160   82708 api_server.go:279] https://127.0.0.1:60124/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 18:42:25.942184   82708 api_server.go:103] status: https://127.0.0.1:60124/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 18:42:25.942207   82708 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60124/healthz ...
	I1025 18:42:26.018556   82708 api_server.go:279] https://127.0.0.1:60124/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 18:42:26.018582   82708 api_server.go:103] status: https://127.0.0.1:60124/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 18:42:26.519359   82708 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60124/healthz ...
	I1025 18:42:26.526249   82708 api_server.go:279] https://127.0.0.1:60124/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:42:26.526267   82708 api_server.go:103] status: https://127.0.0.1:60124/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:42:27.018884   82708 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60124/healthz ...
	I1025 18:42:27.025076   82708 api_server.go:279] https://127.0.0.1:60124/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:42:27.025092   82708 api_server.go:103] status: https://127.0.0.1:60124/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:42:27.518892   82708 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60124/healthz ...
	I1025 18:42:27.527425   82708 api_server.go:279] https://127.0.0.1:60124/healthz returned 200:
	ok
	I1025 18:42:27.538387   82708 api_server.go:141] control plane version: v1.28.3
	I1025 18:42:27.538404   82708 api_server.go:131] duration metric: took 3.897277265s to wait for apiserver health ...
	I1025 18:42:27.538411   82708 cni.go:84] Creating CNI manager for ""
	I1025 18:42:27.538422   82708 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:42:27.560199   82708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
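	(Hedged note on the healthz progression above, not part of the captured output.) The 403 -> 500 -> 200 sequence is the usual start-up shape for this wait loop: anonymous requests to /healthz are rejected until the rbac/bootstrap-roles post-start hook has installed the default roles, the endpoint then reports 500 while the remaining hooks finish, and finally returns 200. The same endpoint can be watched by hand using the host port Docker mapped for this run (60124):
	  curl -k "https://127.0.0.1:60124/healthz?verbose"   # -k because the apiserver presents minikube's own CA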
	I1025 18:42:26.079147   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:26.092048   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:26.111323   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.111350   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:26.111453   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:26.136937   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.136951   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:26.137019   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:26.158149   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.158163   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:26.158238   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:26.179188   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.179203   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:26.179269   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:26.202527   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.202543   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:26.202613   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:26.224821   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.224836   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:26.224909   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:26.245135   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.245148   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:26.245217   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:26.284062   82181 logs.go:284] 0 containers: []
	W1025 18:42:26.284075   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:26.284089   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:26.284096   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:26.300674   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:26.300689   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:26.353791   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:26.353806   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:26.393216   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:26.393232   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:26.407929   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:26.407944   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:26.466691   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:28.966934   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:28.978454   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:28.999866   82181 logs.go:284] 0 containers: []
	W1025 18:42:28.999879   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:28.999947   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:29.020989   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.021003   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:29.021076   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:29.043200   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.043214   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:29.043280   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:29.065502   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.065515   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:29.065617   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:29.085973   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.085986   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:29.086051   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:29.106325   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.106338   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:29.106400   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:29.127027   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.127040   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:29.127106   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:29.147670   82181 logs.go:284] 0 containers: []
	W1025 18:42:29.147684   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:29.147692   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:29.147699   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:29.192133   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:29.192153   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:29.209030   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:29.209045   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:29.285627   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:29.285642   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:29.285649   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:29.302481   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:29.302496   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:27.580217   82708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 18:42:27.625965   82708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1025 18:42:27.728110   82708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:42:27.741886   82708 system_pods.go:59] 8 kube-system pods found
	I1025 18:42:27.741910   82708 system_pods.go:61] "coredns-5dd5756b68-2wdjz" [fe1de17c-2fab-4881-a1d8-28fc0f952ffa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:42:27.741919   82708 system_pods.go:61] "etcd-embed-certs-488000" [9d389acd-62b0-47cf-bffd-8bf86a94fe15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 18:42:27.741931   82708 system_pods.go:61] "kube-apiserver-embed-certs-488000" [ae3d4fa8-15e6-4cb5-ba5c-5eaeb490c994] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 18:42:27.741937   82708 system_pods.go:61] "kube-controller-manager-embed-certs-488000" [1b018ae8-0315-4846-9825-f88e487dfb65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 18:42:27.741944   82708 system_pods.go:61] "kube-proxy-fstnb" [2e2011c7-1624-42eb-96bb-015880693ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 18:42:27.741949   82708 system_pods.go:61] "kube-scheduler-embed-certs-488000" [3569dac4-b1fd-4476-92e5-e8f430d56891] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 18:42:27.741954   82708 system_pods.go:61] "metrics-server-57f55c9bc5-ltgmx" [ad8f9e5e-55dd-4b09-84e9-2a0783bb37de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 18:42:27.741965   82708 system_pods.go:61] "storage-provisioner" [0916ad35-845e-4089-b848-1a50b0ab03cb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:42:27.741970   82708 system_pods.go:74] duration metric: took 13.841179ms to wait for pod list to return data ...
	I1025 18:42:27.741976   82708 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:42:27.814343   82708 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:42:27.814369   82708 node_conditions.go:123] node cpu capacity is 12
	I1025 18:42:27.814384   82708 node_conditions.go:105] duration metric: took 72.401167ms to run NodePressure ...
	I1025 18:42:27.814415   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:42:28.736830   82708 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1025 18:42:28.816644   82708 kubeadm.go:787] kubelet initialised
	I1025 18:42:28.816658   82708 kubeadm.go:788] duration metric: took 79.806939ms waiting for restarted kubelet to initialise ...
	I1025 18:42:28.816665   82708 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:42:28.823905   82708 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2wdjz" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:30.844946   82708 pod_ready.go:102] pod "coredns-5dd5756b68-2wdjz" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:31.845190   82708 pod_ready.go:92] pod "coredns-5dd5756b68-2wdjz" in "kube-system" namespace has status "Ready":"True"
	I1025 18:42:31.845202   82708 pod_ready.go:81] duration metric: took 3.021172327s waiting for pod "coredns-5dd5756b68-2wdjz" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:31.845209   82708 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:31.860934   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:31.873726   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:31.893030   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.893044   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:31.893110   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:31.914194   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.914206   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:31.914282   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:31.936038   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.936051   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:31.936118   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:31.956278   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.956292   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:31.956361   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:31.977253   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.977268   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:31.977341   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:31.997016   82181 logs.go:284] 0 containers: []
	W1025 18:42:31.997029   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:31.997100   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:32.018599   82181 logs.go:284] 0 containers: []
	W1025 18:42:32.018613   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:32.018712   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:32.038346   82181 logs.go:284] 0 containers: []
	W1025 18:42:32.038358   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:32.038364   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:32.038370   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:32.079868   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:32.079885   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:32.094721   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:32.094740   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:32.152312   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:32.152324   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:32.152331   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:32.170501   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:32.170515   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:34.730788   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:34.743703   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:34.763876   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.763891   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:34.763961   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:34.784936   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.784949   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:34.785015   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:34.804472   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.804492   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:34.804559   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:34.823884   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.823896   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:34.823961   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:34.844084   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.844097   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:34.844163   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:34.866866   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.866880   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:34.866948   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:34.887159   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.887178   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:34.887247   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:34.906508   82181 logs.go:284] 0 containers: []
	W1025 18:42:34.906523   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:34.906532   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:34.906539   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:34.944889   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:34.944903   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:34.959750   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:34.959778   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:35.016706   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:35.016734   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:35.016746   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:35.033235   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:35.033250   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:33.861682   82708 pod_ready.go:102] pod "etcd-embed-certs-488000" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:35.864029   82708 pod_ready.go:102] pod "etcd-embed-certs-488000" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:37.588779   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:37.602341   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:37.621945   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.621962   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:37.622030   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:37.641340   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.641354   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:37.641425   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:37.662695   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.662709   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:37.662774   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:37.682493   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.682507   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:37.682576   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:37.702946   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.702960   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:37.703030   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:37.724197   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.724210   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:37.724272   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:37.745953   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.745966   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:37.746030   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:37.766330   82181 logs.go:284] 0 containers: []
	W1025 18:42:37.766343   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:37.766350   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:37.766357   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:37.806838   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:37.806853   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:37.821448   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:37.821462   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:37.881607   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:37.881620   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:37.881627   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:37.898407   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:37.898421   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:40.454224   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:40.466837   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:40.487415   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.487430   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:40.487493   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:40.507968   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.507982   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:40.508073   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:40.528554   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.528568   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:40.528635   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:40.549343   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.549356   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:40.549424   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:40.570809   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.570821   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:40.570883   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:40.591237   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.591250   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:40.591318   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:40.610639   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.610653   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:40.610723   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:40.631545   82181 logs.go:284] 0 containers: []
	W1025 18:42:40.631558   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:40.631565   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:40.631572   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:40.673711   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:40.673731   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:40.688824   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:40.688841   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:40.745116   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:40.745130   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:40.745138   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:40.761782   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:40.761796   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:38.365386   82708 pod_ready.go:102] pod "etcd-embed-certs-488000" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:39.362643   82708 pod_ready.go:92] pod "etcd-embed-certs-488000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:42:39.362655   82708 pod_ready.go:81] duration metric: took 7.517190596s waiting for pod "etcd-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:39.362662   82708 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:39.367927   82708 pod_ready.go:92] pod "kube-apiserver-embed-certs-488000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:42:39.367937   82708 pod_ready.go:81] duration metric: took 5.270472ms waiting for pod "kube-apiserver-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:39.367948   82708 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:39.373061   82708 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:42:39.373074   82708 pod_ready.go:81] duration metric: took 5.119948ms waiting for pod "kube-controller-manager-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:39.373098   82708 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fstnb" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:39.378450   82708 pod_ready.go:92] pod "kube-proxy-fstnb" in "kube-system" namespace has status "Ready":"True"
	I1025 18:42:39.378460   82708 pod_ready.go:81] duration metric: took 5.354534ms waiting for pod "kube-proxy-fstnb" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:39.378469   82708 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:41.266491   82708 pod_ready.go:92] pod "kube-scheduler-embed-certs-488000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:42:41.266504   82708 pod_ready.go:81] duration metric: took 1.887969017s waiting for pod "kube-scheduler-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:41.266511   82708 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace to be "Ready" ...
	I1025 18:42:43.317606   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:43.330310   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:43.350293   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.350307   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:43.350387   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:43.372357   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.372378   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:43.372504   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:43.399415   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.399430   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:43.399500   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:43.425149   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.425189   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:43.425263   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:43.449720   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.449736   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:43.449804   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:43.479250   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.479263   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:43.479330   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:43.501367   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.501381   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:43.501454   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:43.522434   82181 logs.go:284] 0 containers: []
	W1025 18:42:43.522479   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:43.522497   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:43.522507   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:43.561697   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:43.561712   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:43.575954   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:43.575969   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:43.634693   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:43.634706   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:43.634712   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:43.651593   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:43.651607   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:43.271764   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:45.769499   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:46.208176   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:46.221278   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:46.240804   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.240817   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:46.240885   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:46.260183   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.260196   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:46.260256   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:46.281753   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.281767   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:46.281835   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:46.302467   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.302481   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:46.302549   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:46.323812   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.323827   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:46.323893   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:46.343177   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.343190   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:46.343261   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:46.362811   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.362823   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:46.362887   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:46.384118   82181 logs.go:284] 0 containers: []
	W1025 18:42:46.384134   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:46.384142   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:46.384149   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:46.424601   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:46.424619   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:46.441513   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:46.441529   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:46.506412   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:46.506439   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:46.506467   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:46.523922   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:46.523937   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:49.080178   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:49.093582   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:49.114320   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.114333   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:49.114395   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:49.134415   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.134429   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:49.134495   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:49.155120   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.155134   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:49.155215   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:49.175847   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.175860   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:49.175926   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:49.196129   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.196143   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:49.196233   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:49.216088   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.216103   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:49.216169   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:49.236056   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.236070   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:49.236136   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:49.256781   82181 logs.go:284] 0 containers: []
	W1025 18:42:49.256794   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:49.256801   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:49.256807   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:49.295689   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:49.295704   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:49.310387   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:49.310403   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:49.367074   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:49.367087   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:49.367095   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:49.383726   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:49.383740   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:48.269001   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:50.769894   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:51.940672   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:51.952964   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:51.973597   82181 logs.go:284] 0 containers: []
	W1025 18:42:51.973610   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:51.973686   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:51.994516   82181 logs.go:284] 0 containers: []
	W1025 18:42:51.994530   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:51.994597   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:52.014565   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.014579   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:52.014643   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:52.034420   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.034432   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:52.034500   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:52.054542   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.054555   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:52.054624   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:52.074755   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.074768   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:52.074828   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:52.095933   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.095946   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:52.096014   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:52.117534   82181 logs.go:284] 0 containers: []
	W1025 18:42:52.117548   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:52.117555   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:52.117561   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:52.134711   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:52.134725   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:52.190799   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:52.190813   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:52.228623   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:52.228637   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:52.242978   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:52.242992   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:52.306286   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:54.806769   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:54.818109   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:54.838918   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.838932   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:54.839001   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:54.858993   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.859006   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:54.859069   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:54.879936   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.879949   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:54.880017   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:54.900081   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.900094   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:54.900160   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:54.921298   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.921312   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:54.921384   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:54.942270   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.942282   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:54.942366   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:54.963192   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.963205   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:54.963276   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:54.983290   82181 logs.go:284] 0 containers: []
	W1025 18:42:54.983304   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:54.983311   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:54.983318   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:55.026485   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:55.026503   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:55.041679   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:55.041693   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:55.099959   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:55.099978   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:55.099984   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:55.116994   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:55.117007   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:53.269341   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:55.770397   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:42:57.671737   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:42:57.683773   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:42:57.705587   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.705601   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:42:57.705681   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:42:57.725950   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.725964   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:42:57.726031   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:42:57.746961   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.746975   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:42:57.747043   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:42:57.768795   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.768808   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:42:57.768884   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:42:57.790459   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.790479   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:42:57.790573   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:42:57.810792   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.810805   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:42:57.810879   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:42:57.829887   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.829900   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:42:57.829966   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:42:57.849589   82181 logs.go:284] 0 containers: []
	W1025 18:42:57.849603   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:42:57.849609   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:42:57.849616   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:42:57.890886   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:42:57.890901   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:42:57.905777   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:42:57.905814   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:42:57.963211   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:42:57.963225   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:42:57.963231   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:42:57.979595   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:42:57.979630   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:00.536356   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:00.549498   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:00.568842   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.568856   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:00.568925   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:00.588876   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.588890   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:00.588954   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:00.609387   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.609401   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:00.609467   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:00.629417   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.629431   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:00.629494   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:00.650837   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.650851   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:00.650917   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:00.673085   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.673099   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:00.673166   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:00.695075   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.695090   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:00.695173   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:00.718193   82181 logs.go:284] 0 containers: []
	W1025 18:43:00.718213   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:00.718222   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:00.718232   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:00.736983   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:00.737000   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:42:58.271988   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:00.769296   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:00.807357   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:00.807370   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:00.847513   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:00.847532   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:00.862585   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:00.862600   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:00.921682   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:03.423193   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:03.435921   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:03.456303   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.456316   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:03.456380   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:03.477728   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.477742   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:03.477811   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:03.497851   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.497863   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:03.497929   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:03.519646   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.519663   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:03.519735   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:03.539586   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.539598   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:03.539692   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:03.560211   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.560224   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:03.560289   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:03.580592   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.580612   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:03.580689   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:03.601020   82181 logs.go:284] 0 containers: []
	W1025 18:43:03.601034   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:03.601042   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:03.601049   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:03.642911   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:03.642927   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:03.658021   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:03.658037   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:03.722199   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:03.722212   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:03.722221   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:03.739853   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:03.739868   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:02.771041   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:04.771862   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:07.270590   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:06.320193   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:06.332408   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:06.352229   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.352242   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:06.352308   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:06.373276   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.373289   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:06.373356   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:06.393560   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.393573   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:06.393640   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:06.414379   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.414392   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:06.414463   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:06.434224   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.434237   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:06.434305   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:06.454028   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.454042   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:06.454109   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:06.474589   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.474602   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:06.474664   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:06.495905   82181 logs.go:284] 0 containers: []
	W1025 18:43:06.495919   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:06.495926   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:06.495933   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:06.511990   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:06.512011   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:06.571180   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:06.571194   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:06.613482   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:06.613499   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:06.629150   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:06.629166   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:06.690883   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:09.191452   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:09.203480   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:09.225695   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.225714   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:09.225799   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:09.248101   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.248115   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:09.248182   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:09.271175   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.271187   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:09.271252   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:09.292253   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.292267   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:09.292337   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:09.312357   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.312370   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:09.312438   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:09.332238   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.332252   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:09.332319   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:09.352193   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.352206   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:09.352273   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:09.372910   82181 logs.go:284] 0 containers: []
	W1025 18:43:09.372923   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:09.372930   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:09.372937   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:09.428280   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:09.428295   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:09.470624   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:09.470641   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:09.485889   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:09.485904   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:09.543915   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:09.543928   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:09.543935   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:09.770427   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:12.269995   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:12.062724   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:12.075835   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:12.095197   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.095211   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:12.095277   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:12.114901   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.114918   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:12.114993   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:12.135976   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.135988   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:12.136057   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:12.156893   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.156915   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:12.156979   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:12.176848   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.176860   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:12.176949   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:12.196985   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.196998   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:12.197064   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:12.217855   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.217875   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:12.217958   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:12.238001   82181 logs.go:284] 0 containers: []
	W1025 18:43:12.238014   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:12.238021   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:12.238028   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:12.294478   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:12.294492   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:12.335740   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:12.335759   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:12.352402   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:12.352419   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:12.409494   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:12.409509   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:12.409516   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:14.927163   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:14.938955   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:14.962097   82181 logs.go:284] 0 containers: []
	W1025 18:43:14.962111   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:14.962194   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:14.983387   82181 logs.go:284] 0 containers: []
	W1025 18:43:14.983401   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:14.983466   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:15.004761   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.004775   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:15.004841   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:15.026314   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.026327   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:15.026400   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:15.047872   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.047885   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:15.047960   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:15.068790   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.068803   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:15.068864   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:15.088555   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.088567   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:15.088642   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:15.111274   82181 logs.go:284] 0 containers: []
	W1025 18:43:15.111288   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:15.111296   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:15.111304   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:15.167796   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:15.167809   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:15.167816   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:15.185028   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:15.185058   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:15.239429   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:15.239444   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:15.279477   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:15.279494   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:14.770423   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:16.771903   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:17.794637   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:17.806084   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:17.825411   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.825425   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:17.825482   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:17.845376   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.845389   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:17.845457   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:17.866310   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.866324   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:17.866394   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:17.887042   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.887062   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:17.887148   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:17.906779   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.906794   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:17.906860   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:17.927562   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.927578   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:17.927655   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:17.949857   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.949873   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:17.949955   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:17.980633   82181 logs.go:284] 0 containers: []
	W1025 18:43:17.980647   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:17.980653   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:17.980662   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:18.022046   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:18.022064   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:18.037351   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:18.037367   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:18.095231   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:18.095244   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:18.095251   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:18.111818   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:18.111833   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:20.669144   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:20.682043   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:20.702095   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.702109   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:20.702191   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:20.722196   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.722210   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:20.722277   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:20.742742   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.742754   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:20.742824   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:20.762863   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.762875   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:20.762937   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:20.783925   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.783938   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:20.784019   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:19.271511   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:21.769261   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:20.804856   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.807235   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:20.807302   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:20.827474   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.827487   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:20.827552   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:20.847423   82181 logs.go:284] 0 containers: []
	W1025 18:43:20.847436   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:20.847443   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:20.847451   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:20.885916   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:20.885931   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:20.900556   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:20.900573   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:20.963556   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:20.963569   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:20.963577   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:20.989133   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:20.989154   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:23.547645   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:23.560476   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:23.581218   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.581239   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:23.581317   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:23.601770   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.601784   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:23.601854   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:23.622352   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.622365   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:23.622434   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:23.643376   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.643389   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:23.643459   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:23.663397   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.663410   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:23.663475   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:23.683699   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.683713   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:23.683779   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:23.703526   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.703540   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:23.703607   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:23.724241   82181 logs.go:284] 0 containers: []
	W1025 18:43:23.724258   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:23.724267   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:23.724276   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:23.738784   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:23.738798   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:23.798090   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:23.798110   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:23.798117   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:23.814687   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:23.814702   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:23.868474   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:23.868488   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:23.769755   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:26.269948   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:26.407583   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:26.420954   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:26.441140   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.441154   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:26.441225   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:26.462682   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.462693   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:26.462762   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:26.483770   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.483783   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:26.483846   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:26.504470   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.504482   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:26.504549   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:26.525960   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.525975   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:26.526042   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:26.545768   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.545782   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:26.545859   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:26.566111   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.566124   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:26.566191   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:26.586312   82181 logs.go:284] 0 containers: []
	W1025 18:43:26.586330   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:26.586340   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:26.586350   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:26.600602   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:26.600616   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:26.663520   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:26.663532   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:26.663539   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:26.680011   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:26.680025   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:26.734173   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:26.734187   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:29.275492   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:29.286455   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:29.306872   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.306887   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:29.306953   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:29.328780   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.328795   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:29.328860   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:29.349088   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.349101   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:29.349165   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:29.368877   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.368890   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:29.368960   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:29.390079   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.390093   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:29.390157   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:29.410893   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.410906   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:29.410972   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:29.432053   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.432066   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:29.432132   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:29.453152   82181 logs.go:284] 0 containers: []
	W1025 18:43:29.453166   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:29.453173   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:29.453180   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:29.493866   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:29.493884   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:29.508439   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:29.508471   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:29.567619   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:29.567643   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:29.567650   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:29.584242   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:29.584256   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:28.272351   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:30.769451   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:32.137838   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:32.150472   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:32.172239   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.172252   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:32.172318   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:32.193987   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.194000   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:32.194076   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:32.216591   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.216603   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:32.216671   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:32.238603   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.238615   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:32.238683   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:32.285054   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.285066   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:32.285134   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:32.305257   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.305271   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:32.305334   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:32.325346   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.325359   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:32.325425   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:32.345116   82181 logs.go:284] 0 containers: []
	W1025 18:43:32.345131   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:32.345138   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:32.345145   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:32.406368   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:32.406381   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:32.406389   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:32.423858   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:32.423871   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:32.479004   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:32.479019   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:32.518893   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:32.518909   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:35.034745   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:35.047319   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:35.067239   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.067252   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:35.067322   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:35.087659   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.087674   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:35.087741   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:35.109742   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.109754   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:35.109815   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:35.130630   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.130643   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:35.130709   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:35.151290   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.151303   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:35.151371   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:35.172089   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.172104   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:35.172173   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:35.194715   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.194727   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:35.194791   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:35.218163   82181 logs.go:284] 0 containers: []
	W1025 18:43:35.218177   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:35.218184   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:35.218191   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:35.261160   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:35.261182   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:35.286461   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:35.286476   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:35.345203   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:35.345215   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:35.345222   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:35.361709   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:35.361723   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:32.770304   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:34.771589   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:37.271789   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:37.917261   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:37.930232   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:37.949274   82181 logs.go:284] 0 containers: []
	W1025 18:43:37.949288   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:37.949351   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:37.970392   82181 logs.go:284] 0 containers: []
	W1025 18:43:37.970406   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:37.970475   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:37.990131   82181 logs.go:284] 0 containers: []
	W1025 18:43:37.990144   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:37.990210   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:38.011356   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.011370   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:38.011436   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:38.031847   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.031860   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:38.031924   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:38.052287   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.052300   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:38.052362   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:38.072524   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.072537   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:38.072601   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:38.092638   82181 logs.go:284] 0 containers: []
	W1025 18:43:38.092665   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:38.092678   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:38.092688   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:38.106922   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:38.106934   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:38.166697   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:38.166714   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:38.166723   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:38.185304   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:38.185320   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:38.246295   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:38.246312   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:39.273361   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:41.771266   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:40.808324   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:40.823135   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:43:40.842626   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.842640   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:43:40.842709   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:43:40.863082   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.863095   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:43:40.863164   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:43:40.885340   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.885354   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:43:40.885421   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:43:40.905392   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.905405   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:43:40.905469   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:43:40.926212   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.926226   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:43:40.926294   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:43:40.947446   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.947464   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:43:40.947539   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:43:40.968767   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.968781   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:43:40.968846   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:43:40.990296   82181 logs.go:284] 0 containers: []
	W1025 18:43:40.990309   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:43:40.990317   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:43:40.990323   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:43:41.029147   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:43:41.029161   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:43:41.043774   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:43:41.043791   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:43:41.102542   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:43:41.102556   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:43:41.102562   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:43:41.119375   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:43:41.119390   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:43:43.677177   82181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:43.689940   82181 kubeadm.go:640] restartCluster took 4m13.375993433s
	W1025 18:43:43.689981   82181 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I1025 18:43:43.689995   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 18:43:44.109601   82181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:43:44.121849   82181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:43:44.131755   82181 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:43:44.131814   82181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:43:44.141941   82181 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:43:44.141970   82181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:43:44.196200   82181 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1025 18:43:44.196259   82181 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:43:44.456251   82181 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:43:44.456339   82181 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:43:44.456431   82181 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:43:44.647870   82181 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:43:44.648757   82181 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:43:44.655612   82181 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1025 18:43:44.734043   82181 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:43:44.755577   82181 out.go:204]   - Generating certificates and keys ...
	I1025 18:43:44.755644   82181 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:43:44.755713   82181 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:43:44.755813   82181 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:43:44.755894   82181 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:43:44.755955   82181 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:43:44.756015   82181 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 18:43:44.756106   82181 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:43:44.756168   82181 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:43:44.756265   82181 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:43:44.756346   82181 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:43:44.756377   82181 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 18:43:44.756430   82181 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:43:44.998891   82181 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:43:45.114523   82181 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:43:45.163460   82181 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:43:45.340794   82181 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:43:45.341910   82181 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:43:45.363624   82181 out.go:204]   - Booting up control plane ...
	I1025 18:43:45.363707   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:43:45.363778   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:43:45.363841   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:43:45.363908   82181 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:43:45.364031   82181 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:43:44.271134   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:46.770321   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:49.271360   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:51.771532   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:54.270790   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:56.271158   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:43:58.274188   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:00.772250   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:03.270556   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:05.272676   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:07.771169   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:09.774169   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:12.271599   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:14.772500   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:17.270979   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:19.271390   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:21.771225   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:25.353015   82181 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 18:44:25.354725   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:44:25.354942   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:44:23.774813   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:26.274875   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:30.357344   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:44:30.357585   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:44:28.771030   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:30.772130   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:32.773410   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:35.274912   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:40.358701   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:44:40.358926   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:44:37.772004   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:39.774059   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:42.271863   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:44.274478   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:46.773119   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:48.773436   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:50.774334   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:53.275065   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:44:55.772136   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:00.360767   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:45:00.360987   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:44:57.774204   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:00.272777   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:02.273670   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:04.772577   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:06.775012   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:09.273135   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:11.274428   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:13.774220   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:15.775161   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:18.273152   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:20.273892   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:22.274740   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:24.774474   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:27.273193   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:29.275070   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:31.773723   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:33.776197   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:36.274597   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:40.364510   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:45:40.364750   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:45:40.364765   82181 kubeadm.go:322] 
	I1025 18:45:40.364810   82181 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 18:45:40.364852   82181 kubeadm.go:322] 	timed out waiting for the condition
	I1025 18:45:40.364859   82181 kubeadm.go:322] 
	I1025 18:45:40.364899   82181 kubeadm.go:322] This error is likely caused by:
	I1025 18:45:40.364936   82181 kubeadm.go:322] 	- The kubelet is not running
	I1025 18:45:40.365074   82181 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 18:45:40.365118   82181 kubeadm.go:322] 
	I1025 18:45:40.365225   82181 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 18:45:40.365277   82181 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 18:45:40.365344   82181 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 18:45:40.365362   82181 kubeadm.go:322] 
	I1025 18:45:40.365516   82181 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 18:45:40.365644   82181 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 18:45:40.365733   82181 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 18:45:40.365772   82181 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 18:45:40.365864   82181 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 18:45:40.365902   82181 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 18:45:40.367617   82181 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 18:45:40.367688   82181 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 18:45:40.367801   82181 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1025 18:45:40.367890   82181 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:45:40.367964   82181 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 18:45:40.368017   82181 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W1025 18:45:40.368092   82181 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
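The repeated [kubelet-check] failures quoted above come from kubeadm probing the kubelet's local healthz endpoint on port 10248 until its wait-control-plane window expires. A minimal Go sketch of an equivalent probe follows; the 40s window is taken from the log, while the 5s retry interval and 2s per-request timeout are assumptions, not kubeadm's exact values.

// probe_kubelet_healthz.go - a minimal sketch of the check behind the repeated
// "[kubelet-check]" lines above: an HTTP GET against the kubelet's local
// healthz endpoint on port 10248, retried until a deadline.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}        // per-request timeout is an assumption
	deadline := time.Now().Add(40 * time.Second)             // the log's initial 40s window

	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// The failure mode in the log: connection refused while the kubelet is down.
			fmt.Println("kubelet not healthy yet:", err)
			time.Sleep(5 * time.Second) // retry interval is an assumption
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("kubelet is healthy")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}

On this run such a probe would never succeed: the log shows connection refused from 18:44:25 through 18:45:40, at which point kubeadm gives up on wait-control-plane.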
	
	I1025 18:45:40.368120   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 18:45:40.786946   82181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:45:40.798986   82181 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:45:40.811591   82181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:45:40.822810   82181 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:45:40.822832   82181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:45:40.877240   82181 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1025 18:45:40.877291   82181 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:45:41.141366   82181 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:45:41.141442   82181 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:45:41.141573   82181 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:45:41.336472   82181 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:45:41.337251   82181 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:45:41.341935   82181 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1025 18:45:41.417417   82181 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:45:41.438822   82181 out.go:204]   - Generating certificates and keys ...
	I1025 18:45:41.438907   82181 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:45:41.438972   82181 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:45:41.439029   82181 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:45:41.439086   82181 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:45:41.439184   82181 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:45:41.439222   82181 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 18:45:41.439270   82181 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:45:41.439381   82181 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:45:41.439500   82181 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:45:41.439584   82181 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:45:41.439624   82181 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 18:45:41.439689   82181 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:45:41.533456   82181 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:45:41.752689   82181 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:45:41.896828   82181 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:45:42.085066   82181 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:45:42.085647   82181 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:45:38.775123   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:41.274513   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:42.107183   82181 out.go:204]   - Booting up control plane ...
	I1025 18:45:42.107381   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:45:42.107505   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:45:42.107639   82181 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:45:42.107791   82181 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:45:42.108009   82181 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:45:43.275170   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:45.275511   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:47.773958   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:49.775601   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:51.775690   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:54.275872   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:56.774532   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:45:58.775964   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:01.274320   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:03.276288   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:05.774641   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:07.776313   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:10.275837   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:12.278004   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:14.775542   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:17.276116   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:19.277006   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:21.775054   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:22.096014   82181 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 18:46:22.096376   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:46:22.096689   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:46:23.776021   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:25.776331   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:27.098194   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:46:27.098441   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:46:27.776606   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:29.777573   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:32.275611   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:34.775871   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:36.776633   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:37.099130   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:46:37.099348   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:46:39.278081   82708 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace has status "Ready":"False"
	I1025 18:46:41.276270   82708 pod_ready.go:81] duration metric: took 4m0.002514585s waiting for pod "metrics-server-57f55c9bc5-ltgmx" in "kube-system" namespace to be "Ready" ...
	E1025 18:46:41.276283   82708 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1025 18:46:41.276297   82708 pod_ready.go:38] duration metric: took 4m12.451970801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:46:41.276317   82708 kubeadm.go:640] restartCluster took 4m30.265255925s
	W1025 18:46:41.276352   82708 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1025 18:46:41.276368   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1025 18:46:48.235741   82708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.959150611s)
	I1025 18:46:48.235805   82708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:46:48.248602   82708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:46:48.258703   82708 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1025 18:46:48.258753   82708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:46:48.268304   82708 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:46:48.268339   82708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 18:46:48.314740   82708 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1025 18:46:48.314779   82708 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 18:46:48.442820   82708 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:46:48.442919   82708 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:46:48.443008   82708 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:46:48.792071   82708 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:46:48.818302   82708 out.go:204]   - Generating certificates and keys ...
	I1025 18:46:48.818390   82708 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 18:46:48.818477   82708 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 18:46:48.818589   82708 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:46:48.818654   82708 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:46:48.818736   82708 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:46:48.818794   82708 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 18:46:48.818851   82708 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:46:48.818906   82708 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:46:48.818980   82708 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:46:48.819051   82708 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:46:48.819090   82708 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 18:46:48.819146   82708 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:46:49.011413   82708 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:46:49.160804   82708 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:46:49.219936   82708 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:46:49.311861   82708 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:46:49.313148   82708 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:46:49.315039   82708 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:46:49.336685   82708 out.go:204]   - Booting up control plane ...
	I1025 18:46:49.336779   82708 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:46:49.336853   82708 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:46:49.336922   82708 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:46:49.337007   82708 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:46:49.337071   82708 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:46:49.337100   82708 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 18:46:49.408718   82708 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:46:54.412121   82708 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.003002 seconds
	I1025 18:46:54.412294   82708 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 18:46:54.423021   82708 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 18:46:54.941832   82708 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 18:46:54.941993   82708 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-488000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 18:46:55.450631   82708 kubeadm.go:322] [bootstrap-token] Using token: o7yzmn.wqac9mj84v0mcdjp
	I1025 18:46:55.471881   82708 out.go:204]   - Configuring RBAC rules ...
	I1025 18:46:55.471985   82708 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 18:46:55.511551   82708 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 18:46:55.518496   82708 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 18:46:55.521191   82708 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 18:46:55.523752   82708 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 18:46:55.527353   82708 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 18:46:55.536506   82708 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 18:46:55.674598   82708 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1025 18:46:55.924957   82708 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1025 18:46:55.925693   82708 kubeadm.go:322] 
	I1025 18:46:55.925772   82708 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1025 18:46:55.925785   82708 kubeadm.go:322] 
	I1025 18:46:55.925915   82708 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1025 18:46:55.925930   82708 kubeadm.go:322] 
	I1025 18:46:55.925954   82708 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1025 18:46:55.926014   82708 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 18:46:55.926093   82708 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 18:46:55.926107   82708 kubeadm.go:322] 
	I1025 18:46:55.926162   82708 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1025 18:46:55.926168   82708 kubeadm.go:322] 
	I1025 18:46:55.926230   82708 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 18:46:55.926248   82708 kubeadm.go:322] 
	I1025 18:46:55.926322   82708 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1025 18:46:55.926405   82708 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 18:46:55.926541   82708 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 18:46:55.926561   82708 kubeadm.go:322] 
	I1025 18:46:55.926680   82708 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 18:46:55.926758   82708 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1025 18:46:55.926768   82708 kubeadm.go:322] 
	I1025 18:46:55.926855   82708 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token o7yzmn.wqac9mj84v0mcdjp \
	I1025 18:46:55.927006   82708 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d \
	I1025 18:46:55.927035   82708 kubeadm.go:322] 	--control-plane 
	I1025 18:46:55.927041   82708 kubeadm.go:322] 
	I1025 18:46:55.927186   82708 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1025 18:46:55.927202   82708 kubeadm.go:322] 
	I1025 18:46:55.927308   82708 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token o7yzmn.wqac9mj84v0mcdjp \
	I1025 18:46:55.927455   82708 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a11d27cb57258687c8842495d6fad151b3cc25aa0ab651613c1e45593bda327d 
	I1025 18:46:55.930815   82708 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1025 18:46:55.931008   82708 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:46:55.931032   82708 cni.go:84] Creating CNI manager for ""
	I1025 18:46:55.931048   82708 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:46:55.989479   82708 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 18:46:56.026562   82708 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 18:46:56.045615   82708 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1025 18:46:56.129726   82708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:46:56.129802   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:56.129802   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc minikube.k8s.io/name=embed-certs-488000 minikube.k8s.io/updated_at=2023_10_25T18_46_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:56.140222   82708 ops.go:34] apiserver oom_adj: -16
	I1025 18:46:56.233753   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:56.310980   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:56.883056   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:57.101676   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:46:57.101907   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:46:57.382897   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:57.883120   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:58.384812   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:58.884053   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:59.383121   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:46:59.883115   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:00.383194   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:00.883562   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:01.383098   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:01.884424   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:02.385093   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:02.883162   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:03.384234   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:03.885202   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:04.383707   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:04.885252   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:05.384535   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:05.884009   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:06.384412   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:06.884231   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:07.383530   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:07.884558   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:08.383342   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:08.884024   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:09.384173   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:09.884507   82708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:09.955678   82708 kubeadm.go:1081] duration metric: took 13.825519677s to wait for elevateKubeSystemPrivileges.
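The burst of "kubectl get sa default" runs between 18:46:56 and 18:47:09 is minikube's elevateKubeSystemPrivileges step waiting for the cluster's default service account to appear (13.8s on this run). A rough Go sketch of that wait is below; the kubectl and kubeconfig paths are copied from the log, while the polling interval and overall deadline are assumptions.

// wait_default_sa.go - a rough sketch of the wait behind the repeated
// "kubectl get sa default" runs above: poll until the default service
// account exists in the new cluster, or give up at a deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.3/kubectl" // path as shown in the log
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	deadline := time.Now().Add(2 * time.Minute) // deadline length is an assumption
	for time.Now().Before(deadline) {
		// Mirrors: sudo <kubectl> get sa default --kubeconfig=/var/lib/minikube/kubeconfig
		if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the spacing of the log lines
	}
	fmt.Println("timed out waiting for the default service account")
}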
	I1025 18:47:09.955698   82708 kubeadm.go:406] StartCluster complete in 4m58.973288706s
	I1025 18:47:09.955715   82708 settings.go:142] acquiring lock: {Name:mkca0a8fe84aa865309571104a1d51551b90d38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:47:09.955826   82708 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:47:09.957029   82708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:47:09.957412   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 18:47:09.957405   82708 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 18:47:09.957472   82708 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-488000"
	I1025 18:47:09.957491   82708 addons.go:69] Setting dashboard=true in profile "embed-certs-488000"
	I1025 18:47:09.957497   82708 addons.go:69] Setting metrics-server=true in profile "embed-certs-488000"
	I1025 18:47:09.957493   82708 addons.go:69] Setting default-storageclass=true in profile "embed-certs-488000"
	I1025 18:47:09.957506   82708 addons.go:231] Setting addon metrics-server=true in "embed-certs-488000"
	W1025 18:47:09.957514   82708 addons.go:240] addon metrics-server should already be in state true
	I1025 18:47:09.957526   82708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-488000"
	I1025 18:47:09.957577   82708 host.go:66] Checking if "embed-certs-488000" exists ...
	I1025 18:47:09.957506   82708 addons.go:231] Setting addon dashboard=true in "embed-certs-488000"
	I1025 18:47:09.957595   82708 config.go:182] Loaded profile config "embed-certs-488000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	W1025 18:47:09.957602   82708 addons.go:240] addon dashboard should already be in state true
	I1025 18:47:09.957699   82708 host.go:66] Checking if "embed-certs-488000" exists ...
	I1025 18:47:09.957492   82708 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-488000"
	W1025 18:47:09.957719   82708 addons.go:240] addon storage-provisioner should already be in state true
	I1025 18:47:09.957751   82708 host.go:66] Checking if "embed-certs-488000" exists ...
	I1025 18:47:09.958010   82708 cli_runner.go:164] Run: docker container inspect embed-certs-488000 --format={{.State.Status}}
	I1025 18:47:09.958163   82708 cli_runner.go:164] Run: docker container inspect embed-certs-488000 --format={{.State.Status}}
	I1025 18:47:09.958239   82708 cli_runner.go:164] Run: docker container inspect embed-certs-488000 --format={{.State.Status}}
	I1025 18:47:09.959398   82708 cli_runner.go:164] Run: docker container inspect embed-certs-488000 --format={{.State.Status}}
	I1025 18:47:10.023564   82708 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-488000" context rescaled to 1 replicas
	I1025 18:47:10.023627   82708 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:47:10.047059   82708 out.go:177] * Verifying Kubernetes components...
	I1025 18:47:10.104181   82708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:47:10.174996   82708 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 18:47:10.115871   82708 addons.go:231] Setting addon default-storageclass=true in "embed-certs-488000"
	I1025 18:47:10.125791   82708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 18:47:10.134045   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:47:10.138018   82708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:47:10.219129   82708 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W1025 18:47:10.219150   82708 addons.go:240] addon default-storageclass should already be in state true
	I1025 18:47:10.240053   82708 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 18:47:10.277287   82708 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 18:47:10.277283   82708 host.go:66] Checking if "embed-certs-488000" exists ...
	I1025 18:47:10.314340   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 18:47:10.314738   82708 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:47:10.336020   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 18:47:10.336036   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:47:10.315701   82708 cli_runner.go:164] Run: docker container inspect embed-certs-488000 --format={{.State.Status}}
	I1025 18:47:10.336046   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 18:47:10.336103   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:47:10.336147   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:47:10.336165   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:47:10.377288   82708 node_ready.go:35] waiting up to 6m0s for node "embed-certs-488000" to be "Ready" ...
	I1025 18:47:10.437267   82708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60120 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/embed-certs-488000/id_rsa Username:docker}
	I1025 18:47:10.437365   82708 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:47:10.437380   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:47:10.437429   82708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60120 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/embed-certs-488000/id_rsa Username:docker}
	I1025 18:47:10.437456   82708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60120 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/embed-certs-488000/id_rsa Username:docker}
	I1025 18:47:10.437629   82708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-488000
	I1025 18:47:10.444464   82708 node_ready.go:49] node "embed-certs-488000" has status "Ready":"True"
	I1025 18:47:10.444500   82708 node_ready.go:38] duration metric: took 67.039332ms waiting for node "embed-certs-488000" to be "Ready" ...
	I1025 18:47:10.444511   82708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:47:10.500481   82708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60120 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/embed-certs-488000/id_rsa Username:docker}
	I1025 18:47:10.530786   82708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z5lzk" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:10.844117   82708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:47:10.844130   82708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:47:10.844142   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 18:47:10.844155   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 18:47:10.849411   82708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 18:47:10.849434   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 18:47:11.020820   82708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 18:47:11.020866   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 18:47:11.023925   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 18:47:11.023971   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 18:47:11.129971   82708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 18:47:11.129989   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 18:47:11.133654   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 18:47:11.133675   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 18:47:11.330604   82708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 18:47:11.338227   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 18:47:11.338248   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 18:47:11.436043   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 18:47:11.436078   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 18:47:11.534542   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 18:47:11.534559   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 18:47:11.633258   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 18:47:11.633275   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 18:47:11.729488   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 18:47:11.729508   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 18:47:11.827705   82708 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 18:47:11.827728   82708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 18:47:11.925060   82708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 18:47:12.644538   82708 pod_ready.go:102] pod "coredns-5dd5756b68-z5lzk" in "kube-system" namespace has status "Ready":"False"
	I1025 18:47:12.721117   82708 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.501887608s)
	I1025 18:47:12.721146   82708 start.go:926] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I1025 18:47:13.230744   82708 pod_ready.go:92] pod "coredns-5dd5756b68-z5lzk" in "kube-system" namespace has status "Ready":"True"
	I1025 18:47:13.230784   82708 pod_ready.go:81] duration metric: took 2.699895275s waiting for pod "coredns-5dd5756b68-z5lzk" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.230801   82708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.241223   82708 pod_ready.go:92] pod "etcd-embed-certs-488000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:47:13.241241   82708 pod_ready.go:81] duration metric: took 10.432042ms waiting for pod "etcd-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.241251   82708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.329179   82708 pod_ready.go:92] pod "kube-apiserver-embed-certs-488000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:47:13.329201   82708 pod_ready.go:81] duration metric: took 87.933685ms waiting for pod "kube-apiserver-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.329214   82708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.334853   82708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.490614399s)
	I1025 18:47:13.334867   82708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.49064768s)
	I1025 18:47:13.338360   82708 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:47:13.338375   82708 pod_ready.go:81] duration metric: took 9.150664ms waiting for pod "kube-controller-manager-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.338392   82708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7mst" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.348432   82708 pod_ready.go:92] pod "kube-proxy-f7mst" in "kube-system" namespace has status "Ready":"True"
	I1025 18:47:13.348447   82708 pod_ready.go:81] duration metric: took 10.047098ms waiting for pod "kube-proxy-f7mst" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.348464   82708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.428271   82708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.097567035s)
	I1025 18:47:13.428308   82708 addons.go:467] Verifying addon metrics-server=true in "embed-certs-488000"
	I1025 18:47:13.622433   82708 pod_ready.go:92] pod "kube-scheduler-embed-certs-488000" in "kube-system" namespace has status "Ready":"True"
	I1025 18:47:13.622450   82708 pod_ready.go:81] duration metric: took 273.970536ms waiting for pod "kube-scheduler-embed-certs-488000" in "kube-system" namespace to be "Ready" ...
	I1025 18:47:13.622458   82708 pod_ready.go:38] duration metric: took 3.177842815s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 18:47:13.622473   82708 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:47:13.622549   82708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:47:14.432864   82708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.507693832s)
	I1025 18:47:14.432897   82708 api_server.go:72] duration metric: took 4.409050244s to wait for apiserver process to appear ...
	I1025 18:47:14.432907   82708 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:47:14.432919   82708 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60124/healthz ...
	I1025 18:47:14.473242   82708 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-488000 addons enable metrics-server	
	
	
	I1025 18:47:14.439949   82708 api_server.go:279] https://127.0.0.1:60124/healthz returned 200:
	ok
	I1025 18:47:14.475407   82708 api_server.go:141] control plane version: v1.28.3
	I1025 18:47:14.508933   82708 api_server.go:131] duration metric: took 76.011671ms to wait for apiserver health ...
	I1025 18:47:14.508949   82708 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:47:14.547048   82708 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1025 18:47:14.590985   82708 addons.go:502] enable addons completed in 4.633444939s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1025 18:47:14.599504   82708 system_pods.go:59] 8 kube-system pods found
	I1025 18:47:14.599520   82708 system_pods.go:61] "coredns-5dd5756b68-z5lzk" [55ad886e-4ffa-4294-8804-c77be7d66ae0] Running
	I1025 18:47:14.599526   82708 system_pods.go:61] "etcd-embed-certs-488000" [dacc735e-221d-45ed-9257-b854d528f152] Running
	I1025 18:47:14.599530   82708 system_pods.go:61] "kube-apiserver-embed-certs-488000" [81098959-7502-4643-a5bc-d3bb69858679] Running
	I1025 18:47:14.599533   82708 system_pods.go:61] "kube-controller-manager-embed-certs-488000" [6a02cfa7-3f4d-46c8-8439-6d4aefab1806] Running
	I1025 18:47:14.599537   82708 system_pods.go:61] "kube-proxy-f7mst" [d882622c-ce90-44a5-9bb7-15a6b99b4529] Running
	I1025 18:47:14.599540   82708 system_pods.go:61] "kube-scheduler-embed-certs-488000" [21723c67-a1d4-4c8a-892e-974d9aa072e6] Running
	I1025 18:47:14.599544   82708 system_pods.go:61] "metrics-server-57f55c9bc5-9wbpw" [c5f96e40-cbae-4d41-8d21-a490c9260f59] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 18:47:14.599552   82708 system_pods.go:61] "storage-provisioner" [11aa987e-03c6-4103-8595-ca5bfe7e4341] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:47:14.599559   82708 system_pods.go:74] duration metric: took 90.601633ms to wait for pod list to return data ...
	I1025 18:47:14.599565   82708 default_sa.go:34] waiting for default service account to be created ...
	I1025 18:47:14.603023   82708 default_sa.go:45] found service account: "default"
	I1025 18:47:14.603037   82708 default_sa.go:55] duration metric: took 3.467961ms for default service account to be created ...
	I1025 18:47:14.603043   82708 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 18:47:14.608415   82708 system_pods.go:86] 8 kube-system pods found
	I1025 18:47:14.608428   82708 system_pods.go:89] "coredns-5dd5756b68-z5lzk" [55ad886e-4ffa-4294-8804-c77be7d66ae0] Running
	I1025 18:47:14.608432   82708 system_pods.go:89] "etcd-embed-certs-488000" [dacc735e-221d-45ed-9257-b854d528f152] Running
	I1025 18:47:14.608436   82708 system_pods.go:89] "kube-apiserver-embed-certs-488000" [81098959-7502-4643-a5bc-d3bb69858679] Running
	I1025 18:47:14.608440   82708 system_pods.go:89] "kube-controller-manager-embed-certs-488000" [6a02cfa7-3f4d-46c8-8439-6d4aefab1806] Running
	I1025 18:47:14.608444   82708 system_pods.go:89] "kube-proxy-f7mst" [d882622c-ce90-44a5-9bb7-15a6b99b4529] Running
	I1025 18:47:14.608447   82708 system_pods.go:89] "kube-scheduler-embed-certs-488000" [21723c67-a1d4-4c8a-892e-974d9aa072e6] Running
	I1025 18:47:14.608453   82708 system_pods.go:89] "metrics-server-57f55c9bc5-9wbpw" [c5f96e40-cbae-4d41-8d21-a490c9260f59] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 18:47:14.608461   82708 system_pods.go:89] "storage-provisioner" [11aa987e-03c6-4103-8595-ca5bfe7e4341] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:47:14.608467   82708 system_pods.go:126] duration metric: took 5.418732ms to wait for k8s-apps to be running ...
	I1025 18:47:14.608472   82708 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 18:47:14.608525   82708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:47:14.620861   82708 system_svc.go:56] duration metric: took 12.382094ms WaitForService to wait for kubelet.
	I1025 18:47:14.620875   82708 kubeadm.go:581] duration metric: took 4.597028561s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 18:47:14.620887   82708 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:47:14.624189   82708 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:47:14.624201   82708 node_conditions.go:123] node cpu capacity is 12
	I1025 18:47:14.624207   82708 node_conditions.go:105] duration metric: took 3.316253ms to run NodePressure ...
	I1025 18:47:14.624214   82708 start.go:228] waiting for startup goroutines ...
	I1025 18:47:14.624219   82708 start.go:233] waiting for cluster config update ...
	I1025 18:47:14.624230   82708 start.go:242] writing updated cluster config ...
	I1025 18:47:14.624607   82708 ssh_runner.go:195] Run: rm -f paused
	I1025 18:47:14.670285   82708 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 18:47:14.692210   82708 out.go:177] * Done! kubectl is now configured to use "embed-certs-488000" cluster and "default" namespace by default
	I1025 18:47:37.103667   82181 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 18:47:37.103918   82181 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 18:47:37.103933   82181 kubeadm.go:322] 
	I1025 18:47:37.103975   82181 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 18:47:37.104049   82181 kubeadm.go:322] 	timed out waiting for the condition
	I1025 18:47:37.104069   82181 kubeadm.go:322] 
	I1025 18:47:37.104118   82181 kubeadm.go:322] This error is likely caused by:
	I1025 18:47:37.104168   82181 kubeadm.go:322] 	- The kubelet is not running
	I1025 18:47:37.104285   82181 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 18:47:37.104294   82181 kubeadm.go:322] 
	I1025 18:47:37.104419   82181 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 18:47:37.104469   82181 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 18:47:37.104497   82181 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 18:47:37.104503   82181 kubeadm.go:322] 
	I1025 18:47:37.104575   82181 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 18:47:37.104647   82181 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 18:47:37.104731   82181 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 18:47:37.104837   82181 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 18:47:37.104921   82181 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 18:47:37.104947   82181 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 18:47:37.107205   82181 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 18:47:37.107291   82181 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1025 18:47:37.107439   82181 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1025 18:47:37.107546   82181 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:47:37.107636   82181 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 18:47:37.107725   82181 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1025 18:47:37.107741   82181 kubeadm.go:406] StartCluster complete in 8m6.825271613s
	I1025 18:47:37.107874   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:37.141468   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.141533   82181 logs.go:286] No container was found matching "kube-apiserver"
	I1025 18:47:37.141666   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:37.172115   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.172129   82181 logs.go:286] No container was found matching "etcd"
	I1025 18:47:37.172199   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:37.194834   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.194847   82181 logs.go:286] No container was found matching "coredns"
	I1025 18:47:37.194905   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:37.218832   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.218848   82181 logs.go:286] No container was found matching "kube-scheduler"
	I1025 18:47:37.218917   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:37.256351   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.256366   82181 logs.go:286] No container was found matching "kube-proxy"
	I1025 18:47:37.256427   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:37.284502   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.284514   82181 logs.go:286] No container was found matching "kube-controller-manager"
	I1025 18:47:37.284567   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:37.308850   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.308866   82181 logs.go:286] No container was found matching "kindnet"
	I1025 18:47:37.308935   82181 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1025 18:47:37.341948   82181 logs.go:284] 0 containers: []
	W1025 18:47:37.341967   82181 logs.go:286] No container was found matching "kubernetes-dashboard"
	I1025 18:47:37.341977   82181 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:37.341987   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:37.400804   82181 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:37.400825   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:37.419687   82181 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:37.419708   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 18:47:37.497072   82181 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 18:47:37.497086   82181 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:37.497106   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:37.514415   82181 logs.go:123] Gathering logs for container status ...
	I1025 18:47:37.514429   82181 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 18:47:37.585147   82181 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 18:47:37.585175   82181 out.go:239] * 
	W1025 18:47:37.585219   82181 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 18:47:37.585237   82181 out.go:239] * 
	W1025 18:47:37.585894   82181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:47:37.652007   82181 out.go:177] 
	W1025 18:47:37.694124   82181 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 18:47:37.694171   82181 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 18:47:37.694185   82181 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 18:47:37.735911   82181 out.go:177] 
	
	* 
	* ==> Docker <==
	* Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.871242467Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.910518800Z" level=info msg="Loading containers: done."
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.919080909Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.919142665Z" level=info msg="Daemon has completed initialization"
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.951066840Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.951104585Z" level=info msg="API listen on [::]:2376"
	Oct 26 01:39:17 old-k8s-version-479000 systemd[1]: Started Docker Application Container Engine.
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Stopping Docker Application Container Engine...
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.083136157Z" level=info msg="Processing signal 'terminated'"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.084131940Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.084231689Z" level=info msg="Daemon shutdown complete"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.084420982Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: docker.service: Deactivated successfully.
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Stopped Docker Application Container Engine.
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Starting Docker Application Container Engine...
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.154547096Z" level=info msg="Starting up"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.167667501Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.424134561Z" level=info msg="Loading containers: start."
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.517825027Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.557588670Z" level=info msg="Loading containers: done."
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.586622144Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.586688417Z" level=info msg="Daemon has completed initialization"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.620523760Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.620527338Z" level=info msg="API listen on [::]:2376"
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-10-26T01:47:39Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  01:47:40 up  1:10,  0 users,  load average: 1.06, 0.77, 1.03
	Linux old-k8s-version-479000 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Oct 10 20:42:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kubelet <==
	* Oct 26 01:47:38 old-k8s-version-479000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 26 01:47:39 old-k8s-version-479000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Oct 26 01:47:39 old-k8s-version-479000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 26 01:47:39 old-k8s-version-479000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 26 01:47:39 old-k8s-version-479000 kubelet[20469]: I1026 01:47:39.525456   20469 server.go:410] Version: v1.16.0
	Oct 26 01:47:39 old-k8s-version-479000 kubelet[20469]: I1026 01:47:39.525755   20469 plugins.go:100] No cloud provider specified.
	Oct 26 01:47:39 old-k8s-version-479000 kubelet[20469]: I1026 01:47:39.525776   20469 server.go:773] Client rotation is on, will bootstrap in background
	Oct 26 01:47:39 old-k8s-version-479000 kubelet[20469]: I1026 01:47:39.529219   20469 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 26 01:47:39 old-k8s-version-479000 kubelet[20469]: W1026 01:47:39.531039   20469 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Oct 26 01:47:39 old-k8s-version-479000 kubelet[20469]: W1026 01:47:39.531172   20469 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Oct 26 01:47:39 old-k8s-version-479000 kubelet[20469]: F1026 01:47:39.531237   20469 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Oct 26 01:47:39 old-k8s-version-479000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 26 01:47:39 old-k8s-version-479000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 26 01:47:40 old-k8s-version-479000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 157.
	Oct 26 01:47:40 old-k8s-version-479000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 26 01:47:40 old-k8s-version-479000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 26 01:47:40 old-k8s-version-479000 kubelet[20584]: I1026 01:47:40.252777   20584 server.go:410] Version: v1.16.0
	Oct 26 01:47:40 old-k8s-version-479000 kubelet[20584]: I1026 01:47:40.253075   20584 plugins.go:100] No cloud provider specified.
	Oct 26 01:47:40 old-k8s-version-479000 kubelet[20584]: I1026 01:47:40.253109   20584 server.go:773] Client rotation is on, will bootstrap in background
	Oct 26 01:47:40 old-k8s-version-479000 kubelet[20584]: I1026 01:47:40.256671   20584 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 26 01:47:40 old-k8s-version-479000 kubelet[20584]: W1026 01:47:40.257522   20584 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Oct 26 01:47:40 old-k8s-version-479000 kubelet[20584]: W1026 01:47:40.257598   20584 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Oct 26 01:47:40 old-k8s-version-479000 kubelet[20584]: F1026 01:47:40.257625   20584 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Oct 26 01:47:40 old-k8s-version-479000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 26 01:47:40 old-k8s-version-479000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:47:40.045218   82944 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 2 (429.254319ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-479000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (510.40s)
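
The failing start above ends with minikube's own suggestions: check 'journalctl -xeu kubelet' and retry with the systemd cgroup driver. A minimal, illustrative sketch of those follow-up steps is shown here; it is not part of the captured output, the profile name and binary path are simply reused from this run, and it assumes the node container from the failed start is still present on the test host:

	out/minikube-darwin-amd64 ssh -p old-k8s-version-479000 "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-479000 "sudo journalctl -xeu kubelet"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-479000 "docker ps -a | grep kube | grep -v pause"
	out/minikube-darwin-amd64 start -p old-k8s-version-479000 --extra-config=kubelet.cgroup-driver=systemd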

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:48:06.939756   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:48:23.912529   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:49:12.908635   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:49:28.662321   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:49:46.958885   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
E1025 18:49:47.475624   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:49:55.646699   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:50:00.551325   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:50:23.335553   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:50:28.205331   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:50:35.956635   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:51:02.643361   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:51:10.524406   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:51:26.657611   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:51:51.194822   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 18:51:51.258670   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:52:20.993571   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:52:25.755595   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:52:31.719449   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:52:35.329385   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:53:06.949472   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:53:14.277485   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:53:23.921241   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:53:44.040121   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:54:12.918564   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:54:28.672696   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:54:29.995270   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:54:55.656870   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:55:28.215781   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 2 (385.335492ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-479000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
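For context on the failure above: the helper at helpers_test.go:329 keeps listing pods in the "kubernetes-dashboard" namespace with the label selector k8s-app=kubernetes-dashboard and retries until a pod is up or the 9m0s deadline passes; in this run every attempt died with EOF against https://127.0.0.1:59993. A rough client-go sketch of that style of wait (illustrative only: the kubeconfig path, the 3-second interval, and the Running-phase check are assumptions, not the helper's exact logic):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration; the CI run points KUBECONFIG at its own integration directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// 9-minute deadline, matching the timeout reported by the test.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	err = wait.PollUntilContextCancel(ctx, 3*time.Second, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// In the run above this List call returned EOF on every attempt; log and retry.
			fmt.Println("WARNING: pod list returned:", err)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		// With no successful attempt the deadline expires, i.e. "context deadline exceeded".
		fmt.Println("dashboard pod never came up:", err)
	}
}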
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-479000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-479000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69",
	        "Created": "2023-10-26T01:32:58.324650138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 334177,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:39:11.94787661Z",
	            "FinishedAt": "2023-10-26T01:39:09.148658914Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69-json.log",
	        "Name": "/old-k8s-version-479000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-479000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-479000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-479000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-479000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-479000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc3db0f18f0faa6596591e1d572ee41d081e2b2af745d61195c907cba1db1022",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59994"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59995"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59996"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59992"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59993"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dc3db0f18f0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-479000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5e3f3c28e57c",
	                        "old-k8s-version-479000"
	                    ],
	                    "NetworkID": "e1c286b1eee5e63f7c876927f11c7e5f513aa124ea1227ec48978fbb98cbe026",
	                    "EndpointID": "a062e5ce1f7c9ea5b00721beec8298e5232dea7572107ad45a21b2733d6f4e61",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
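One detail the inspect output above makes concrete: 8443/tcp inside the container is published on 127.0.0.1:59993, the exact endpoint every pod-list warning earlier failed against with EOF. The container itself reports Running, so it is the apiserver behind that port that stopped answering. A minimal probe sketch against that mapping (the host port is ephemeral per run, and skipping TLS verification is for local diagnosis only):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Host port taken from the docker inspect output above (8443/tcp -> 127.0.0.1:59993); it changes on every run.
	const endpoint = "https://127.0.0.1:59993/version"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only because this is a throwaway local cluster being probed for liveness.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(endpoint)
	if err != nil {
		// An EOF or connection reset here matches the warnings logged by the test helper.
		fmt.Println("apiserver not answering:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}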
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 2 (383.482864ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-479000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-479000 logs -n 25: (1.424338614s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-488000                                  | embed-certs-488000           | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-488000                                  | embed-certs-488000           | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-488000                                  | embed-certs-488000           | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	| delete  | -p embed-certs-488000                                  | embed-certs-488000           | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	| delete  | -p                                                     | disable-driver-mounts-361000 | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	|         | disable-driver-mounts-361000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:48 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-555000  | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:49 PDT | 25 Oct 23 18:49 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:49 PDT | 25 Oct 23 18:49 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-555000       | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:49 PDT | 25 Oct 23 18:49 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:49 PDT | 25 Oct 23 18:54 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:54 PDT | 25 Oct 23 18:54 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:54 PDT | 25 Oct 23 18:54 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:54 PDT | 25 Oct 23 18:55 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-343000 --memory=2200 --alsologtostderr   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.28.3          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-343000             | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-343000                  | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-343000 --memory=2200 --alsologtostderr   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:56 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.28.3          |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-343000 sudo                              | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	| delete  | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 18:55:54
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 18:55:54.072762   83628 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:55:54.073058   83628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:54.073063   83628 out.go:309] Setting ErrFile to fd 2...
	I1025 18:55:54.073068   83628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:54.073249   83628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:55:54.074665   83628 out.go:303] Setting JSON to false
	I1025 18:55:54.096368   83628 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":35722,"bootTime":1698249632,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 18:55:54.096469   83628 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:54.118141   83628 out.go:177] * [newest-cni-343000] minikube v1.31.2 on Darwin 14.0
	I1025 18:55:54.161731   83628 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 18:55:54.161816   83628 notify.go:220] Checking for updates...
	I1025 18:55:54.183969   83628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:55:54.205953   83628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:55:54.248790   83628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:54.269912   83628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 18:55:54.291686   83628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:54.313443   83628 config.go:182] Loaded profile config "newest-cni-343000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:55:54.314214   83628 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:55:54.372912   83628 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 18:55:54.373058   83628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:55:54.471736   83628 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:55:54.459346701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:55:54.515303   83628 out.go:177] * Using the docker driver based on existing profile
	I1025 18:55:54.538356   83628 start.go:298] selected driver: docker
	I1025 18:55:54.538381   83628 start.go:902] validating driver "docker" against &{Name:newest-cni-343000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-343000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress:
Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:55:54.538506   83628 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:54.543006   83628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:55:54.643391   83628 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:55:54.632365055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
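
For context, a dump like the docker info block above comes from shelling out to the Docker CLI and decoding its JSON output. A minimal Go sketch of that pattern, using only the standard library (the trimmed-down info struct is illustrative, not minikube's actual type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// info decodes only the handful of fields inspected here; the real
// `docker info` payload carries many more.
type info struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	CgroupDriver    string `json:"CgroupDriver"`
}

func main() {
	// `docker info --format '{{json .}}'` prints the whole info payload as JSON.
	out, err := exec.Command("docker", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker info: %v", err)
	}
	var i info
	if err := json.Unmarshal(out, &i); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
		i.ServerVersion, i.OperatingSystem, i.NCPU, i.MemTotal, i.CgroupDriver)
}
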
	I1025 18:55:54.643656   83628 start_flags.go:945] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 18:55:54.643723   83628 cni.go:84] Creating CNI manager for ""
	I1025 18:55:54.643737   83628 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:55:54.643748   83628 start_flags.go:323] config:
	{Name:newest-cni-343000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-343000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:55:54.686113   83628 out.go:177] * Starting control plane node newest-cni-343000 in cluster newest-cni-343000
	I1025 18:55:54.707241   83628 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:55:54.728244   83628 out.go:177] * Pulling base image ...
	I1025 18:55:54.771132   83628 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:55:54.771161   83628 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:55:54.771201   83628 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 18:55:54.771213   83628 cache.go:56] Caching tarball of preloaded images
	I1025 18:55:54.771306   83628 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:55:54.771318   83628 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 18:55:54.771411   83628 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/config.json ...
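
The "Saving config" step above persists the profile shown earlier as JSON under the profile directory. As a rough sketch only, with a deliberately tiny clusterConfig struct standing in for the real one, an atomic save could look like:

package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

// clusterConfig is an illustrative subset of the profile fields dumped in the log.
type clusterConfig struct {
	Name              string
	Driver            string
	KubernetesVersion string
	Memory            int
	CPUs              int
}

// saveProfile writes config.json atomically: temp file first, then rename.
func saveProfile(dir string, cfg clusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := filepath.Join(dir, ".config.json.tmp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, "config.json"))
}

func main() {
	cfg := clusterConfig{
		Name:              "newest-cni-343000",
		Driver:            "docker",
		KubernetesVersion: "v1.28.3",
		Memory:            2200,
		CPUs:              2,
	}
	if err := saveProfile(".", cfg); err != nil {
		log.Fatal(err)
	}
}
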
	I1025 18:55:54.821685   83628 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:55:54.821707   83628 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:55:54.821729   83628 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:55:54.821778   83628 start.go:365] acquiring machines lock for newest-cni-343000: {Name:mk525e2f0aa53f8504b24dbafcf08d912d8d647f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:54.821862   83628 start.go:369] acquired machines lock for "newest-cni-343000" in 55.083µs
	I1025 18:55:54.821883   83628 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:55:54.821892   83628 fix.go:54] fixHost starting: 
	I1025 18:55:54.822124   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:55:54.873572   83628 fix.go:102] recreateIfNeeded on newest-cni-343000: state=Stopped err=<nil>
	W1025 18:55:54.873615   83628 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 18:55:54.895203   83628 out.go:177] * Restarting existing docker container for "newest-cni-343000" ...
	I1025 18:55:54.937240   83628 cli_runner.go:164] Run: docker start newest-cni-343000
	I1025 18:55:55.225147   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:55:55.282680   83628 kic.go:427] container "newest-cni-343000" state is running.
	I1025 18:55:55.283250   83628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-343000
	I1025 18:55:55.400408   83628 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/config.json ...
	I1025 18:55:55.401044   83628 machine.go:88] provisioning docker machine ...
	I1025 18:55:55.401107   83628 ubuntu.go:169] provisioning hostname "newest-cni-343000"
	I1025 18:55:55.401252   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:55.478148   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:55.478532   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:55.478553   83628 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-343000 && echo "newest-cni-343000" | sudo tee /etc/hostname
	I1025 18:55:55.764837   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-343000
	
	I1025 18:55:55.764975   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:55.820876   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:55.821184   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:55.821198   83628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:55:55.950914   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:55:55.950935   83628 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:55:55.950953   83628 ubuntu.go:177] setting up certificates
	I1025 18:55:55.950961   83628 provision.go:83] configureAuth start
	I1025 18:55:55.951047   83628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-343000
	I1025 18:55:56.006745   83628 provision.go:138] copyHostCerts
	I1025 18:55:56.006848   83628 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:55:56.006859   83628 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:55:56.006987   83628 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:55:56.007203   83628 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:55:56.007211   83628 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:55:56.007314   83628 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:55:56.007509   83628 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:55:56.007515   83628 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:55:56.007581   83628 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:55:56.007723   83628 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-343000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-343000]
	I1025 18:55:56.145369   83628 provision.go:172] copyRemoteCerts
	I1025 18:55:56.145426   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:55:56.145482   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.197815   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:56.289462   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 18:55:56.312220   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:55:56.335409   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1025 18:55:56.359068   83628 provision.go:86] duration metric: configureAuth took 408.069528ms
	I1025 18:55:56.359081   83628 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:55:56.359223   83628 config.go:182] Loaded profile config "newest-cni-343000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:55:56.359288   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.411212   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:56.411519   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:56.411535   83628 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:55:56.535312   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:55:56.535334   83628 ubuntu.go:71] root file system type: overlay
	I1025 18:55:56.535449   83628 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:55:56.535565   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.586897   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:56.587221   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:56.587271   83628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:55:56.722068   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:55:56.722181   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.773506   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:56.773805   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:56.773818   83628 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:55:56.902383   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:55:56.902400   83628 machine.go:91] provisioned docker machine in 1.501303619s
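
The provisioning above renders docker.service.new, diffs it against the installed unit, and only swaps the file and restarts dockerd when something changed. A hedged Go sketch of that compare-then-swap idea for a local file (updateUnit and the sample unit text are illustrative; the real flow runs the equivalent shell commands over SSH):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// updateUnit installs newContents at path and restarts docker, but only when
// the rendered unit actually differs from what is already on disk.
func updateUnit(path string, newContents []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContents) {
		return nil // unchanged: skip the needless daemon restart
	}
	if err := os.WriteFile(path+".new", newContents, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd --default-ulimit=nofile=1048576:1048576\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		log.Fatal(err)
	}
}
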
	I1025 18:55:56.902410   83628 start.go:300] post-start starting for "newest-cni-343000" (driver="docker")
	I1025 18:55:56.902420   83628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:55:56.902484   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:55:56.902544   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.955476   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:57.045120   83628 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:55:57.049796   83628 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:55:57.049818   83628 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:55:57.049826   83628 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:55:57.049834   83628 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:55:57.049845   83628 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:55:57.049941   83628 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:55:57.050097   83628 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:55:57.050245   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:55:57.059521   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:55:57.083621   83628 start.go:303] post-start completed in 181.193525ms
	I1025 18:55:57.083725   83628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:55:57.083798   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:57.136368   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:57.224473   83628 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:55:57.229867   83628 fix.go:56] fixHost completed within 2.407902833s
	I1025 18:55:57.229888   83628 start.go:83] releasing machines lock for "newest-cni-343000", held for 2.40794597s
	I1025 18:55:57.229971   83628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-343000
	I1025 18:55:57.281725   83628 ssh_runner.go:195] Run: cat /version.json
	I1025 18:55:57.281756   83628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:55:57.281803   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:57.281834   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:57.338289   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:57.338288   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:57.426142   83628 ssh_runner.go:195] Run: systemctl --version
	I1025 18:55:57.536871   83628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 18:55:57.544691   83628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 18:55:57.565617   83628 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 18:55:57.565692   83628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 18:55:57.575510   83628 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 18:55:57.575525   83628 start.go:472] detecting cgroup driver to use...
	I1025 18:55:57.575543   83628 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:55:57.575672   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:55:57.592100   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 18:55:57.602562   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:55:57.613258   83628 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:55:57.613317   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:55:57.623970   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:55:57.634887   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:55:57.645559   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:55:57.656324   83628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:55:57.666666   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:55:57.677406   83628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:55:57.687803   83628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:55:57.697996   83628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:55:57.762418   83628 ssh_runner.go:195] Run: sudo systemctl restart containerd
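
The preceding run of sed commands rewrites /etc/containerd/config.toml (sandbox image, runtime class, SystemdCgroup) before containerd is restarted. The SystemdCgroup toggle alone, expressed as a small Go program with the standard regexp package (path and value are taken from the log; everything else is a sketch):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`:
	// keep the indentation, force the value to false so containerd uses cgroupfs.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
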
	I1025 18:55:57.839749   83628 start.go:472] detecting cgroup driver to use...
	I1025 18:55:57.839768   83628 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:55:57.839830   83628 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:55:57.853291   83628 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:55:57.853397   83628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:55:57.866827   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:55:57.886307   83628 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:55:57.891831   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:55:57.902979   83628 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:55:57.949564   83628 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:55:58.076472   83628 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:55:58.171405   83628 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:55:58.171490   83628 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:55:58.189694   83628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:55:58.276780   83628 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:55:58.579112   83628 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:55:58.643008   83628 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:55:58.705609   83628 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:55:58.767844   83628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:55:58.830330   83628 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:55:58.864744   83628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:55:58.926086   83628 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 18:55:59.018241   83628 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:55:59.018332   83628 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:55:59.023587   83628 start.go:540] Will wait 60s for crictl version
	I1025 18:55:59.023664   83628 ssh_runner.go:195] Run: which crictl
	I1025 18:55:59.028541   83628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:55:59.077066   83628 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 18:55:59.098819   83628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:55:59.125778   83628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:55:59.173927   83628 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 18:55:59.174008   83628 cli_runner.go:164] Run: docker exec -t newest-cni-343000 dig +short host.docker.internal
	I1025 18:55:59.306036   83628 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:55:59.306134   83628 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:55:59.311249   83628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
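
The bash one-liner above removes any stale host.minikube.internal mapping from /etc/hosts and appends the freshly resolved host IP. Approximately the same logic in Go, for illustration only (the real command runs remotely inside the container over SSH):

package main

import (
	"log"
	"os"
	"strings"
)

// setHostsEntry rewrites hostsPath so that exactly one line maps name to ip.
func setHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors `grep -v $'\thost.minikube.internal$'`: drop the old mapping.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
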
	I1025 18:55:59.323383   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:59.397991   83628 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 18:55:59.419695   83628 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:55:59.419831   83628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:55:59.442292   83628 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:55:59.442314   83628 docker.go:623] Images already preloaded, skipping extraction
	I1025 18:55:59.442394   83628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:55:59.463437   83628 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:55:59.463467   83628 cache_images.go:84] Images are preloaded, skipping loading
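
"Images are preloaded, skipping loading" follows from comparing the `docker images` listing above against the image set the preload tarball is expected to provide. A sketch of that check in Go (the expected list is copied from the log; the program itself is illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.28.3",
		"registry.k8s.io/kube-controller-manager:v1.28.3",
		"registry.k8s.io/kube-scheduler:v1.28.3",
		"registry.k8s.io/kube-proxy:v1.28.3",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/coredns/coredns:v1.10.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would load from preload tarball:", img)
		}
	}
}
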
	I1025 18:55:59.463552   83628 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:55:59.515566   83628 cni.go:84] Creating CNI manager for ""
	I1025 18:55:59.515583   83628 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:55:59.515599   83628 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1025 18:55:59.515617   83628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-343000 NodeName:newest-cni-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:55:59.515762   83628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-343000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:55:59.515831   83628 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-343000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
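
The kubelet unit above is assembled from the node settings in the config (Kubernetes version, node name, node IP, CRI socket, feature gates). Below is a small illustrative Go helper that builds the same ExecStart line; kubeletFlags is a made-up name, and the flag values are the ones visible in the log:

package main

import (
	"fmt"
	"strings"
)

// kubeletFlags renders the ExecStart argument list for the kubelet unit.
func kubeletFlags(version, nodeName, nodeIP, criSocket, featureGates string) string {
	args := []string{
		"/var/lib/minikube/binaries/" + version + "/kubelet",
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime-endpoint=" + criSocket,
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	if featureGates != "" {
		args = append(args, "--feature-gates="+featureGates)
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println("ExecStart=" + kubeletFlags("v1.28.3", "newest-cni-343000", "192.168.76.2",
		"unix:///var/run/cri-dockerd.sock", "ServerSideApply=true"))
}
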
	I1025 18:55:59.515890   83628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 18:55:59.525585   83628 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:55:59.525655   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:55:59.535029   83628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I1025 18:55:59.552225   83628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:55:59.569852   83628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1025 18:55:59.587449   83628 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:55:59.592521   83628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:55:59.604295   83628 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000 for IP: 192.168.76.2
	I1025 18:55:59.604315   83628 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:55:59.604472   83628 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:55:59.604517   83628 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:55:59.604606   83628 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/client.key
	I1025 18:55:59.604692   83628 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/apiserver.key.31bdca25
	I1025 18:55:59.604741   83628 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/proxy-client.key
	I1025 18:55:59.604941   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:55:59.604975   83628 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:55:59.604984   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:55:59.605018   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:55:59.605050   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:55:59.605081   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:55:59.605149   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:55:59.605685   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 18:55:59.629985   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 18:55:59.654097   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:55:59.677589   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 18:55:59.701702   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:55:59.725542   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:55:59.749023   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:55:59.772833   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:55:59.796924   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:55:59.820176   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:55:59.843064   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:55:59.866558   83628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:55:59.883968   83628 ssh_runner.go:195] Run: openssl version
	I1025 18:55:59.889902   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:55:59.900635   83628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:55:59.905322   83628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:55:59.905367   83628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:55:59.912594   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:55:59.922671   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:55:59.933533   83628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:55:59.938562   83628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:55:59.938631   83628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:55:59.946345   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:55:59.957161   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:55:59.969095   83628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:55:59.974470   83628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:55:59.974538   83628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:55:59.982680   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:55:59.993297   83628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:55:59.999062   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 18:56:00.006974   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 18:56:00.014625   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 18:56:00.023528   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 18:56:00.031235   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 18:56:00.039064   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
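
Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 seconds). The same check can be written in pure Go with crypto/x509; expiresWithin below is a hypothetical helper, and the path is just one of the files listed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before now+d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Equivalent to: openssl x509 -noout -in <path> -checkend 86400
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	}
}
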
	I1025 18:56:00.046330   83628 kubeadm.go:404] StartCluster: {Name:newest-cni-343000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-343000 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mul
tiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:56:00.046516   83628 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:56:00.067500   83628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:56:00.077603   83628 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 18:56:00.077627   83628 kubeadm.go:636] restartCluster start
	I1025 18:56:00.077706   83628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 18:56:00.087287   83628 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:00.087360   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:00.141247   83628 kubeconfig.go:135] verify returned: extract IP: "newest-cni-343000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:56:00.141401   83628 kubeconfig.go:146] "newest-cni-343000" context is missing from /Users/jenkins/minikube-integration/17488-64832/kubeconfig - will repair!
	I1025 18:56:00.141719   83628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:56:00.143289   83628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 18:56:00.153344   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:00.153399   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:00.164003   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:00.164012   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:00.164059   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:00.174525   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:00.675066   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:00.675189   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:00.687491   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:01.175159   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:01.175376   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:01.188852   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:01.675474   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:01.675646   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:01.688324   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:02.174891   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:02.175088   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:02.188385   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:02.676169   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:02.676284   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:02.689171   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:03.176203   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:03.176458   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:03.189766   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:03.674857   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:03.674979   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:03.686811   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:04.174790   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:04.174900   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:04.187666   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:04.675371   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:04.675511   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:04.688459   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:05.174968   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:05.175085   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:05.187491   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:05.674886   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:05.674982   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:05.687406   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:06.174838   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:06.174960   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:06.186729   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:06.675115   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:06.675192   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:06.687017   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:07.176926   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:07.177046   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:07.189845   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:07.675528   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:07.675729   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:07.688506   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:08.174882   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:08.174980   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:08.187219   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:08.676335   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:08.676394   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:08.687635   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:09.176419   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:09.176559   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:09.189552   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:09.676430   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:09.676532   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:09.689633   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:10.154887   83628 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1025 18:56:10.155034   83628 kubeadm.go:1128] stopping kube-system containers ...
	I1025 18:56:10.155152   83628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:56:10.179525   83628 docker.go:464] Stopping containers: [0fb1176339f8 7a209347c89e 48ca9d47f88b 8f39368b7272 8f57982e96e1 2dda95220d25 24c40b23c57a 8583fe5b9f37 d4d447fb063a 4c0c70614002 3128acab8e29 0b0f38caf416 6ad0e95c4260 1bf6f80b8b9a]
	I1025 18:56:10.179609   83628 ssh_runner.go:195] Run: docker stop 0fb1176339f8 7a209347c89e 48ca9d47f88b 8f39368b7272 8f57982e96e1 2dda95220d25 24c40b23c57a 8583fe5b9f37 d4d447fb063a 4c0c70614002 3128acab8e29 0b0f38caf416 6ad0e95c4260 1bf6f80b8b9a
	I1025 18:56:10.201404   83628 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 18:56:10.214357   83628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:56:10.223977   83628 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct 26 01:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 26 01:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct 26 01:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct 26 01:55 /etc/kubernetes/scheduler.conf
	
	I1025 18:56:10.224041   83628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 18:56:10.233322   83628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 18:56:10.242555   83628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 18:56:10.251667   83628 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:10.251725   83628 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:56:10.260771   83628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 18:56:10.270252   83628 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:10.270320   83628 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 18:56:10.279515   83628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:56:10.288945   83628 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 18:56:10.288958   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:10.340741   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:10.889370   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:11.026635   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:11.086230   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:11.180272   83628 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:56:11.180408   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:11.251151   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:11.769850   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:12.271379   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:12.358230   83628 api_server.go:72] duration metric: took 1.177923123s to wait for apiserver process to appear ...
	I1025 18:56:12.358247   83628 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:56:12.358273   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:12.359959   83628 api_server.go:269] stopped: https://127.0.0.1:60922/healthz: Get "https://127.0.0.1:60922/healthz": EOF
	I1025 18:56:12.359983   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:12.361083   83628 api_server.go:269] stopped: https://127.0.0.1:60922/healthz: Get "https://127.0.0.1:60922/healthz": EOF
	I1025 18:56:12.861296   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:14.751873   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 18:56:14.751910   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 18:56:14.751922   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:14.850117   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:14.850146   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:14.861299   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:14.869466   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:14.869491   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:15.362007   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:15.367188   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:15.367204   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:15.861775   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:15.870218   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:15.870258   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:16.361388   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:16.370915   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:16.370947   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:16.861469   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:16.868545   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 200:
	ok
	I1025 18:56:16.877392   83628 api_server.go:141] control plane version: v1.28.3
	I1025 18:56:16.877408   83628 api_server.go:131] duration metric: took 4.519020322s to wait for apiserver health ...
	I1025 18:56:16.877415   83628 cni.go:84] Creating CNI manager for ""
	I1025 18:56:16.877425   83628 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:56:16.899903   83628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 18:56:16.921823   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 18:56:16.932679   83628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1025 18:56:16.950030   83628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:56:16.958679   83628 system_pods.go:59] 8 kube-system pods found
	I1025 18:56:16.958697   83628 system_pods.go:61] "coredns-5dd5756b68-jl7lg" [bd867f5e-4a47-4512-ba89-96b32dbfe9a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:56:16.958716   83628 system_pods.go:61] "etcd-newest-cni-343000" [4f259ab6-2d46-4eed-ae0f-6f97154458e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 18:56:16.958728   83628 system_pods.go:61] "kube-apiserver-newest-cni-343000" [f323d009-d5c3-4f20-a607-f9cff2b446b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 18:56:16.958739   83628 system_pods.go:61] "kube-controller-manager-newest-cni-343000" [9c7c0ea2-932b-42b6-b78e-c030b0509dae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 18:56:16.958747   83628 system_pods.go:61] "kube-proxy-jbcmw" [f8998e55-1244-46d7-959c-7c635e823a81] Running
	I1025 18:56:16.958752   83628 system_pods.go:61] "kube-scheduler-newest-cni-343000" [7c5468d6-8f17-48ee-9d9d-af81e174dd04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 18:56:16.958758   83628 system_pods.go:61] "metrics-server-57f55c9bc5-qh9cv" [8c87c66f-3d51-4387-a715-57e7f065731b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 18:56:16.958763   83628 system_pods.go:61] "storage-provisioner" [690b9c44-928a-466b-8dd2-09177d72006b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:56:16.958768   83628 system_pods.go:74] duration metric: took 8.725743ms to wait for pod list to return data ...
	I1025 18:56:16.958776   83628 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:56:16.962047   83628 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:56:16.962062   83628 node_conditions.go:123] node cpu capacity is 12
	I1025 18:56:16.962073   83628 node_conditions.go:105] duration metric: took 3.293635ms to run NodePressure ...
	I1025 18:56:16.962085   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:17.146763   83628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:56:17.156079   83628 ops.go:34] apiserver oom_adj: -16
	I1025 18:56:17.156096   83628 kubeadm.go:640] restartCluster took 17.077946084s
	I1025 18:56:17.156104   83628 kubeadm.go:406] StartCluster complete in 17.109266639s
	I1025 18:56:17.156119   83628 settings.go:142] acquiring lock: {Name:mkca0a8fe84aa865309571104a1d51551b90d38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:56:17.156198   83628 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:56:17.156798   83628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:56:17.157062   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 18:56:17.157080   83628 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 18:56:17.157133   83628 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-343000"
	I1025 18:56:17.157156   83628 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-343000"
	W1025 18:56:17.157164   83628 addons.go:240] addon storage-provisioner should already be in state true
	I1025 18:56:17.157193   83628 addons.go:69] Setting default-storageclass=true in profile "newest-cni-343000"
	I1025 18:56:17.157207   83628 addons.go:69] Setting dashboard=true in profile "newest-cni-343000"
	I1025 18:56:17.157215   83628 addons.go:69] Setting metrics-server=true in profile "newest-cni-343000"
	I1025 18:56:17.157223   83628 addons.go:231] Setting addon metrics-server=true in "newest-cni-343000"
	W1025 18:56:17.157228   83628 addons.go:240] addon metrics-server should already be in state true
	I1025 18:56:17.157226   83628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-343000"
	I1025 18:56:17.157209   83628 host.go:66] Checking if "newest-cni-343000" exists ...
	I1025 18:56:17.157250   83628 config.go:182] Loaded profile config "newest-cni-343000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:56:17.157255   83628 host.go:66] Checking if "newest-cni-343000" exists ...
	I1025 18:56:17.157229   83628 addons.go:231] Setting addon dashboard=true in "newest-cni-343000"
	W1025 18:56:17.157285   83628 addons.go:240] addon dashboard should already be in state true
	I1025 18:56:17.157353   83628 host.go:66] Checking if "newest-cni-343000" exists ...
	I1025 18:56:17.157494   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.157625   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.158417   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.158741   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.166844   83628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-343000" context rescaled to 1 replicas
	I1025 18:56:17.166888   83628 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:56:17.204453   83628 out.go:177] * Verifying Kubernetes components...
	I1025 18:56:17.264394   83628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:56:17.265000   83628 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1025 18:56:17.297046   83628 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 18:56:17.276641   83628 addons.go:231] Setting addon default-storageclass=true in "newest-cni-343000"
	I1025 18:56:17.291495   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.318100   83628 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W1025 18:56:17.318109   83628 addons.go:240] addon default-storageclass should already be in state true
	I1025 18:56:17.360012   83628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:56:17.360108   83628 host.go:66] Checking if "newest-cni-343000" exists ...
	I1025 18:56:17.397082   83628 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 18:56:17.418276   83628 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 18:56:17.418881   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.455018   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 18:56:17.492249   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 18:56:17.492275   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 18:56:17.455165   83628 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:56:17.492322   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:56:17.492379   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.492383   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.492456   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.495512   83628 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:56:17.495707   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:17.517349   83628 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:56:17.517371   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:56:17.517510   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.520068   83628 api_server.go:72] duration metric: took 353.129672ms to wait for apiserver process to appear ...
	I1025 18:56:17.520093   83628 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:56:17.520117   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:17.527729   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 200:
	ok
	I1025 18:56:17.530665   83628 api_server.go:141] control plane version: v1.28.3
	I1025 18:56:17.530688   83628 api_server.go:131] duration metric: took 10.586938ms to wait for apiserver health ...
	I1025 18:56:17.530702   83628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:56:17.541232   83628 system_pods.go:59] 8 kube-system pods found
	I1025 18:56:17.541262   83628 system_pods.go:61] "coredns-5dd5756b68-jl7lg" [bd867f5e-4a47-4512-ba89-96b32dbfe9a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:56:17.541273   83628 system_pods.go:61] "etcd-newest-cni-343000" [4f259ab6-2d46-4eed-ae0f-6f97154458e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 18:56:17.541285   83628 system_pods.go:61] "kube-apiserver-newest-cni-343000" [f323d009-d5c3-4f20-a607-f9cff2b446b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 18:56:17.541293   83628 system_pods.go:61] "kube-controller-manager-newest-cni-343000" [9c7c0ea2-932b-42b6-b78e-c030b0509dae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 18:56:17.541301   83628 system_pods.go:61] "kube-proxy-jbcmw" [f8998e55-1244-46d7-959c-7c635e823a81] Running
	I1025 18:56:17.541309   83628 system_pods.go:61] "kube-scheduler-newest-cni-343000" [7c5468d6-8f17-48ee-9d9d-af81e174dd04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 18:56:17.541316   83628 system_pods.go:61] "metrics-server-57f55c9bc5-qh9cv" [8c87c66f-3d51-4387-a715-57e7f065731b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 18:56:17.541323   83628 system_pods.go:61] "storage-provisioner" [690b9c44-928a-466b-8dd2-09177d72006b] Running
	I1025 18:56:17.541329   83628 system_pods.go:74] duration metric: took 10.61845ms to wait for pod list to return data ...
	I1025 18:56:17.541336   83628 default_sa.go:34] waiting for default service account to be created ...
	I1025 18:56:17.546179   83628 default_sa.go:45] found service account: "default"
	I1025 18:56:17.546207   83628 default_sa.go:55] duration metric: took 4.852014ms for default service account to be created ...
	I1025 18:56:17.546221   83628 kubeadm.go:581] duration metric: took 379.288312ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1025 18:56:17.546244   83628 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:56:17.551342   83628 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:56:17.551358   83628 node_conditions.go:123] node cpu capacity is 12
	I1025 18:56:17.551368   83628 node_conditions.go:105] duration metric: took 5.11922ms to run NodePressure ...
	I1025 18:56:17.551379   83628 start.go:228] waiting for startup goroutines ...
	I1025 18:56:17.572004   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:56:17.573779   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:56:17.573818   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:56:17.595685   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:56:17.680951   83628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 18:56:17.680961   83628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:56:17.680964   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 18:56:17.681151   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 18:56:17.681167   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 18:56:17.701617   83628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:56:17.702542   83628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 18:56:17.702556   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 18:56:17.702650   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 18:56:17.702663   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 18:56:17.755506   83628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 18:56:17.755522   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 18:56:17.755623   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 18:56:17.755634   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 18:56:17.778856   83628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 18:56:17.779265   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 18:56:17.779280   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 18:56:17.863826   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 18:56:17.863846   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 18:56:17.956157   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 18:56:17.956172   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 18:56:17.980204   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 18:56:17.980225   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 18:56:18.059494   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 18:56:18.059511   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 18:56:18.078787   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 18:56:18.078801   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 18:56:18.153309   83628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 18:56:19.055155   83628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.374110435s)
	I1025 18:56:19.055169   83628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.353496248s)
	I1025 18:56:19.167938   83628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.389008032s)
	I1025 18:56:19.167962   83628 addons.go:467] Verifying addon metrics-server=true in "newest-cni-343000"
	I1025 18:56:19.454898   83628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.301521131s)
	I1025 18:56:19.478933   83628 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-343000 addons enable metrics-server	
	
	
	I1025 18:56:19.540100   83628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1025 18:56:19.582739   83628 addons.go:502] enable addons completed in 2.42559116s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1025 18:56:19.582772   83628 start.go:233] waiting for cluster config update ...
	I1025 18:56:19.582791   83628 start.go:242] writing updated cluster config ...
	I1025 18:56:19.583253   83628 ssh_runner.go:195] Run: rm -f paused
	I1025 18:56:19.623536   83628 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 18:56:19.644979   83628 out.go:177] * Done! kubectl is now configured to use "newest-cni-343000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.871242467Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.910518800Z" level=info msg="Loading containers: done."
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.919080909Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.919142665Z" level=info msg="Daemon has completed initialization"
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.951066840Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.951104585Z" level=info msg="API listen on [::]:2376"
	Oct 26 01:39:17 old-k8s-version-479000 systemd[1]: Started Docker Application Container Engine.
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Stopping Docker Application Container Engine...
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.083136157Z" level=info msg="Processing signal 'terminated'"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.084131940Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.084231689Z" level=info msg="Daemon shutdown complete"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.084420982Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: docker.service: Deactivated successfully.
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Stopped Docker Application Container Engine.
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Starting Docker Application Container Engine...
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.154547096Z" level=info msg="Starting up"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.167667501Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.424134561Z" level=info msg="Loading containers: start."
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.517825027Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.557588670Z" level=info msg="Loading containers: done."
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.586622144Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.586688417Z" level=info msg="Daemon has completed initialization"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.620523760Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.620527338Z" level=info msg="API listen on [::]:2376"
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* time="2023-10-26T01:56:43Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  01:56:43 up  1:19,  0 users,  load average: 0.89, 0.81, 0.91
	Linux old-k8s-version-479000 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Oct 10 20:42:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kubelet <==
	* Oct 26 01:56:41 old-k8s-version-479000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 26 01:56:42 old-k8s-version-479000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 879.
	Oct 26 01:56:42 old-k8s-version-479000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 26 01:56:42 old-k8s-version-479000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 26 01:56:42 old-k8s-version-479000 kubelet[33487]: I1026 01:56:42.516697   33487 server.go:410] Version: v1.16.0
	Oct 26 01:56:42 old-k8s-version-479000 kubelet[33487]: I1026 01:56:42.516875   33487 plugins.go:100] No cloud provider specified.
	Oct 26 01:56:42 old-k8s-version-479000 kubelet[33487]: I1026 01:56:42.516885   33487 server.go:773] Client rotation is on, will bootstrap in background
	Oct 26 01:56:42 old-k8s-version-479000 kubelet[33487]: I1026 01:56:42.518637   33487 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 26 01:56:42 old-k8s-version-479000 kubelet[33487]: W1026 01:56:42.519250   33487 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Oct 26 01:56:42 old-k8s-version-479000 kubelet[33487]: W1026 01:56:42.519319   33487 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Oct 26 01:56:42 old-k8s-version-479000 kubelet[33487]: F1026 01:56:42.519343   33487 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Oct 26 01:56:42 old-k8s-version-479000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 26 01:56:42 old-k8s-version-479000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 26 01:56:43 old-k8s-version-479000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 880.
	Oct 26 01:56:43 old-k8s-version-479000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 26 01:56:43 old-k8s-version-479000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 26 01:56:43 old-k8s-version-479000 kubelet[33579]: I1026 01:56:43.274310   33579 server.go:410] Version: v1.16.0
	Oct 26 01:56:43 old-k8s-version-479000 kubelet[33579]: I1026 01:56:43.274536   33579 plugins.go:100] No cloud provider specified.
	Oct 26 01:56:43 old-k8s-version-479000 kubelet[33579]: I1026 01:56:43.274548   33579 server.go:773] Client rotation is on, will bootstrap in background
	Oct 26 01:56:43 old-k8s-version-479000 kubelet[33579]: I1026 01:56:43.276153   33579 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 26 01:56:43 old-k8s-version-479000 kubelet[33579]: W1026 01:56:43.276765   33579 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Oct 26 01:56:43 old-k8s-version-479000 kubelet[33579]: W1026 01:56:43.276830   33579 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Oct 26 01:56:43 old-k8s-version-479000 kubelet[33579]: F1026 01:56:43.276857   33579 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Oct 26 01:56:43 old-k8s-version-479000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 26 01:56:43 old-k8s-version-479000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:56:43.503580   83863 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 2 (387.826092ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-479000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.19s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (373.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1025 18:56:51.099778   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:57:20.897286   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:57:35.232679   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:58:06.852752   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:58:23.824380   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:58:58.316933   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 18:58:59.652938   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:58:59.658014   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:58:59.669097   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:58:59.689301   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:58:59.730307   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:58:59.811148   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:58:59.973418   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:59:00.294091   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:59:00.936482   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:59:02.217629   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:59:04.777773   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:59:09.900041   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
E1025 18:59:12.817501   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:59:20.142343   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:59:28.570850   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:59:40.623225   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 18:59:47.382393   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:59:55.553433   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:00:00.456792   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:00:21.584273   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:00:28.110869   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:01:02.548105   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:01:18.602316   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:01:26.562706   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:01:43.505998   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/default-k8s-diff-port-555000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:01:51.097865   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:02:20.896397   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1025 19:02:35.230168   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59993/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 2 (409.121352ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-479000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-479000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-479000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.93µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-479000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-479000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-479000:

-- stdout --
	[
	    {
	        "Id": "5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69",
	        "Created": "2023-10-26T01:32:58.324650138Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 334177,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-26T01:39:11.94787661Z",
	            "FinishedAt": "2023-10-26T01:39:09.148658914Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/hosts",
	        "LogPath": "/var/lib/docker/containers/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69/5e3f3c28e57cb270f49205eeb37ac08f10551bd5b9480af216c9e9d4af914f69-json.log",
	        "Name": "/old-k8s-version-479000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-479000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-479000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b-init/diff:/var/lib/docker/overlay2/d80c3c6ebb3e22fc0994c621eeb60a01efaecbf75cf8c7e33299fa73160e5f82/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38224b4095bfa384a8392fe28fd4684bbed1e685b1da03f4bd770e877c6a5c2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-479000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-479000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-479000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-479000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc3db0f18f0faa6596591e1d572ee41d081e2b2af745d61195c907cba1db1022",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59994"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59995"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59996"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59992"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59993"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dc3db0f18f0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-479000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5e3f3c28e57c",
	                        "old-k8s-version-479000"
	                    ],
	                    "NetworkID": "e1c286b1eee5e63f7c876927f11c7e5f513aa124ea1227ec48978fbb98cbe026",
	                    "EndpointID": "a062e5ce1f7c9ea5b00721beec8298e5232dea7572107ad45a21b2733d6f4e61",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 2 (442.916871ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-479000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-479000 logs -n 25: (1.439508063s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-488000                                  | embed-certs-488000           | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-488000                                  | embed-certs-488000           | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-488000                                  | embed-certs-488000           | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	| delete  | -p embed-certs-488000                                  | embed-certs-488000           | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	| delete  | -p                                                     | disable-driver-mounts-361000 | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:47 PDT |
	|         | disable-driver-mounts-361000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:47 PDT | 25 Oct 23 18:48 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-555000  | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:49 PDT | 25 Oct 23 18:49 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:49 PDT | 25 Oct 23 18:49 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-555000       | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:49 PDT | 25 Oct 23 18:49 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:49 PDT | 25 Oct 23 18:54 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:54 PDT | 25 Oct 23 18:54 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:54 PDT | 25 Oct 23 18:54 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:54 PDT | 25 Oct 23 18:55 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-555000 | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | default-k8s-diff-port-555000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-343000 --memory=2200 --alsologtostderr   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.28.3          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-343000             | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-343000                  | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:55 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-343000 --memory=2200 --alsologtostderr   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:55 PDT | 25 Oct 23 18:56 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.28.3          |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-343000 sudo                              | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	| delete  | -p newest-cni-343000                                   | newest-cni-343000            | jenkins | v1.31.2 | 25 Oct 23 18:56 PDT | 25 Oct 23 18:56 PDT |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 18:55:54
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 18:55:54.072762   83628 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:55:54.073058   83628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:54.073063   83628 out.go:309] Setting ErrFile to fd 2...
	I1025 18:55:54.073068   83628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:54.073249   83628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:55:54.074665   83628 out.go:303] Setting JSON to false
	I1025 18:55:54.096368   83628 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":35722,"bootTime":1698249632,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 18:55:54.096469   83628 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:54.118141   83628 out.go:177] * [newest-cni-343000] minikube v1.31.2 on Darwin 14.0
	I1025 18:55:54.161731   83628 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 18:55:54.161816   83628 notify.go:220] Checking for updates...
	I1025 18:55:54.183969   83628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:55:54.205953   83628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:55:54.248790   83628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:54.269912   83628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 18:55:54.291686   83628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:54.313443   83628 config.go:182] Loaded profile config "newest-cni-343000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:55:54.314214   83628 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:55:54.372912   83628 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 18:55:54.373058   83628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:55:54.471736   83628 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:55:54.459346701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:55:54.515303   83628 out.go:177] * Using the docker driver based on existing profile
	I1025 18:55:54.538356   83628 start.go:298] selected driver: docker
	I1025 18:55:54.538381   83628 start.go:902] validating driver "docker" against &{Name:newest-cni-343000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-343000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress:
Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:55:54.538506   83628 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:54.543006   83628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 18:55:54.643391   83628 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 01:55:54.632365055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 18:55:54.643656   83628 start_flags.go:945] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 18:55:54.643723   83628 cni.go:84] Creating CNI manager for ""
	I1025 18:55:54.643737   83628 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:55:54.643748   83628 start_flags.go:323] config:
	{Name:newest-cni-343000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-343000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:55:54.686113   83628 out.go:177] * Starting control plane node newest-cni-343000 in cluster newest-cni-343000
	I1025 18:55:54.707241   83628 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 18:55:54.728244   83628 out.go:177] * Pulling base image ...
	I1025 18:55:54.771132   83628 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:55:54.771161   83628 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 18:55:54.771201   83628 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 18:55:54.771213   83628 cache.go:56] Caching tarball of preloaded images
	I1025 18:55:54.771306   83628 preload.go:174] Found /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 18:55:54.771318   83628 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 18:55:54.771411   83628 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/config.json ...
	I1025 18:55:54.821685   83628 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1025 18:55:54.821707   83628 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1025 18:55:54.821729   83628 cache.go:194] Successfully downloaded all kic artifacts
	I1025 18:55:54.821778   83628 start.go:365] acquiring machines lock for newest-cni-343000: {Name:mk525e2f0aa53f8504b24dbafcf08d912d8d647f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:54.821862   83628 start.go:369] acquired machines lock for "newest-cni-343000" in 55.083µs
	I1025 18:55:54.821883   83628 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:55:54.821892   83628 fix.go:54] fixHost starting: 
	I1025 18:55:54.822124   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:55:54.873572   83628 fix.go:102] recreateIfNeeded on newest-cni-343000: state=Stopped err=<nil>
	W1025 18:55:54.873615   83628 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 18:55:54.895203   83628 out.go:177] * Restarting existing docker container for "newest-cni-343000" ...
	I1025 18:55:54.937240   83628 cli_runner.go:164] Run: docker start newest-cni-343000
	I1025 18:55:55.225147   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:55:55.282680   83628 kic.go:427] container "newest-cni-343000" state is running.
	I1025 18:55:55.283250   83628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-343000
	I1025 18:55:55.400408   83628 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/config.json ...
	I1025 18:55:55.401044   83628 machine.go:88] provisioning docker machine ...
	I1025 18:55:55.401107   83628 ubuntu.go:169] provisioning hostname "newest-cni-343000"
	I1025 18:55:55.401252   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:55.478148   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:55.478532   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:55.478553   83628 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-343000 && echo "newest-cni-343000" | sudo tee /etc/hostname
	I1025 18:55:55.764837   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-343000
	
	I1025 18:55:55.764975   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:55.820876   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:55.821184   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:55.821198   83628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:55:55.950914   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:55:55.950935   83628 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17488-64832/.minikube CaCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17488-64832/.minikube}
	I1025 18:55:55.950953   83628 ubuntu.go:177] setting up certificates
	I1025 18:55:55.950961   83628 provision.go:83] configureAuth start
	I1025 18:55:55.951047   83628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-343000
	I1025 18:55:56.006745   83628 provision.go:138] copyHostCerts
	I1025 18:55:56.006848   83628 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem, removing ...
	I1025 18:55:56.006859   83628 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem
	I1025 18:55:56.006987   83628 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.pem (1078 bytes)
	I1025 18:55:56.007203   83628 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem, removing ...
	I1025 18:55:56.007211   83628 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem
	I1025 18:55:56.007314   83628 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/cert.pem (1123 bytes)
	I1025 18:55:56.007509   83628 exec_runner.go:144] found /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem, removing ...
	I1025 18:55:56.007515   83628 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem
	I1025 18:55:56.007581   83628 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17488-64832/.minikube/key.pem (1679 bytes)
	I1025 18:55:56.007723   83628 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem org=jenkins.newest-cni-343000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-343000]
	I1025 18:55:56.145369   83628 provision.go:172] copyRemoteCerts
	I1025 18:55:56.145426   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:55:56.145482   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.197815   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:56.289462   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 18:55:56.312220   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 18:55:56.335409   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1025 18:55:56.359068   83628 provision.go:86] duration metric: configureAuth took 408.069528ms
	I1025 18:55:56.359081   83628 ubuntu.go:193] setting minikube options for container-runtime
	I1025 18:55:56.359223   83628 config.go:182] Loaded profile config "newest-cni-343000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:55:56.359288   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.411212   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:56.411519   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:56.411535   83628 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:55:56.535312   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 18:55:56.535334   83628 ubuntu.go:71] root file system type: overlay
	I1025 18:55:56.535449   83628 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:55:56.535565   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.586897   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:56.587221   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:56.587271   83628 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:55:56.722068   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:55:56.722181   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.773506   83628 main.go:141] libmachine: Using SSH client type: native
	I1025 18:55:56.773805   83628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 127.0.0.1 60918 <nil> <nil>}
	I1025 18:55:56.773818   83628 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:55:56.902383   83628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:55:56.902400   83628 machine.go:91] provisioned docker machine in 1.501303619s
	I1025 18:55:56.902410   83628 start.go:300] post-start starting for "newest-cni-343000" (driver="docker")
	I1025 18:55:56.902420   83628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:55:56.902484   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:55:56.902544   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:56.955476   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:57.045120   83628 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:55:57.049796   83628 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 18:55:57.049818   83628 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1025 18:55:57.049826   83628 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1025 18:55:57.049834   83628 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1025 18:55:57.049845   83628 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/addons for local assets ...
	I1025 18:55:57.049941   83628 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17488-64832/.minikube/files for local assets ...
	I1025 18:55:57.050097   83628 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem -> 652922.pem in /etc/ssl/certs
	I1025 18:55:57.050245   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:55:57.059521   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:55:57.083621   83628 start.go:303] post-start completed in 181.193525ms
	I1025 18:55:57.083725   83628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:55:57.083798   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:57.136368   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:57.224473   83628 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 18:55:57.229867   83628 fix.go:56] fixHost completed within 2.407902833s
	I1025 18:55:57.229888   83628 start.go:83] releasing machines lock for "newest-cni-343000", held for 2.40794597s
	I1025 18:55:57.229971   83628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-343000
	I1025 18:55:57.281725   83628 ssh_runner.go:195] Run: cat /version.json
	I1025 18:55:57.281756   83628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:55:57.281803   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:57.281834   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:57.338289   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:57.338288   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:55:57.426142   83628 ssh_runner.go:195] Run: systemctl --version
	I1025 18:55:57.536871   83628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1025 18:55:57.544691   83628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1025 18:55:57.565617   83628 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1025 18:55:57.565692   83628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 18:55:57.575510   83628 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 18:55:57.575525   83628 start.go:472] detecting cgroup driver to use...
	I1025 18:55:57.575543   83628 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:55:57.575672   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:55:57.592100   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 18:55:57.602562   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:55:57.613258   83628 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:55:57.613317   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:55:57.623970   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:55:57.634887   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:55:57.645559   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:55:57.656324   83628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:55:57.666666   83628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:55:57.677406   83628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:55:57.687803   83628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:55:57.697996   83628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:55:57.762418   83628 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:55:57.839749   83628 start.go:472] detecting cgroup driver to use...
	I1025 18:55:57.839768   83628 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1025 18:55:57.839830   83628 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:55:57.853291   83628 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1025 18:55:57.853397   83628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:55:57.866827   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:55:57.886307   83628 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:55:57.891831   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:55:57.902979   83628 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:55:57.949564   83628 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:55:58.076472   83628 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:55:58.171405   83628 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:55:58.171490   83628 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:55:58.189694   83628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:55:58.276780   83628 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:55:58.579112   83628 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:55:58.643008   83628 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:55:58.705609   83628 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:55:58.767844   83628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:55:58.830330   83628 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:55:58.864744   83628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:55:58.926086   83628 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 18:55:59.018241   83628 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:55:59.018332   83628 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:55:59.023587   83628 start.go:540] Will wait 60s for crictl version
	I1025 18:55:59.023664   83628 ssh_runner.go:195] Run: which crictl
	I1025 18:55:59.028541   83628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:55:59.077066   83628 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 18:55:59.098819   83628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:55:59.125778   83628 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:55:59.173927   83628 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 18:55:59.174008   83628 cli_runner.go:164] Run: docker exec -t newest-cni-343000 dig +short host.docker.internal
	I1025 18:55:59.306036   83628 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1025 18:55:59.306134   83628 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1025 18:55:59.311249   83628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:55:59.323383   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:55:59.397991   83628 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 18:55:59.419695   83628 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:55:59.419831   83628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:55:59.442292   83628 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:55:59.442314   83628 docker.go:623] Images already preloaded, skipping extraction
	I1025 18:55:59.442394   83628 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:55:59.463437   83628 docker.go:693] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:55:59.463467   83628 cache_images.go:84] Images are preloaded, skipping loading
	I1025 18:55:59.463552   83628 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:55:59.515566   83628 cni.go:84] Creating CNI manager for ""
	I1025 18:55:59.515583   83628 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:55:59.515599   83628 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1025 18:55:59.515617   83628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-343000 NodeName:newest-cni-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:55:59.515762   83628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-343000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:55:59.515831   83628 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-343000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 18:55:59.515890   83628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 18:55:59.525585   83628 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:55:59.525655   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:55:59.535029   83628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I1025 18:55:59.552225   83628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:55:59.569852   83628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1025 18:55:59.587449   83628 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 18:55:59.592521   83628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:55:59.604295   83628 certs.go:56] Setting up /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000 for IP: 192.168.76.2
	I1025 18:55:59.604315   83628 certs.go:190] acquiring lock for shared ca certs: {Name:mk3b233645537eeaa35f16b83a4ace6d87ff2e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:55:59.604472   83628 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key
	I1025 18:55:59.604517   83628 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key
	I1025 18:55:59.604606   83628 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/client.key
	I1025 18:55:59.604692   83628 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/apiserver.key.31bdca25
	I1025 18:55:59.604741   83628 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/proxy-client.key
	I1025 18:55:59.604941   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem (1338 bytes)
	W1025 18:55:59.604975   83628 certs.go:433] ignoring /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292_empty.pem, impossibly tiny 0 bytes
	I1025 18:55:59.604984   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 18:55:59.605018   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/ca.pem (1078 bytes)
	I1025 18:55:59.605050   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:55:59.605081   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/certs/key.pem (1679 bytes)
	I1025 18:55:59.605149   83628 certs.go:437] found cert: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem (1708 bytes)
	I1025 18:55:59.605685   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 18:55:59.629985   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 18:55:59.654097   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:55:59.677589   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/newest-cni-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 18:55:59.701702   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:55:59.725542   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 18:55:59.749023   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:55:59.772833   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:55:59.796924   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/ssl/certs/652922.pem --> /usr/share/ca-certificates/652922.pem (1708 bytes)
	I1025 18:55:59.820176   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:55:59.843064   83628 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17488-64832/.minikube/certs/65292.pem --> /usr/share/ca-certificates/65292.pem (1338 bytes)
	I1025 18:55:59.866558   83628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:55:59.883968   83628 ssh_runner.go:195] Run: openssl version
	I1025 18:55:59.889902   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65292.pem && ln -fs /usr/share/ca-certificates/65292.pem /etc/ssl/certs/65292.pem"
	I1025 18:55:59.900635   83628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65292.pem
	I1025 18:55:59.905322   83628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:44 /usr/share/ca-certificates/65292.pem
	I1025 18:55:59.905367   83628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65292.pem
	I1025 18:55:59.912594   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/65292.pem /etc/ssl/certs/51391683.0"
	I1025 18:55:59.922671   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/652922.pem && ln -fs /usr/share/ca-certificates/652922.pem /etc/ssl/certs/652922.pem"
	I1025 18:55:59.933533   83628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/652922.pem
	I1025 18:55:59.938562   83628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:44 /usr/share/ca-certificates/652922.pem
	I1025 18:55:59.938631   83628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/652922.pem
	I1025 18:55:59.946345   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/652922.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:55:59.957161   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:55:59.969095   83628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:55:59.974470   83628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:39 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:55:59.974538   83628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:55:59.982680   83628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:55:59.993297   83628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 18:55:59.999062   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 18:56:00.006974   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 18:56:00.014625   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 18:56:00.023528   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 18:56:00.031235   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 18:56:00.039064   83628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 18:56:00.046330   83628 kubeadm.go:404] StartCluster: {Name:newest-cni-343000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-343000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:56:00.046516   83628 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:56:00.067500   83628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:56:00.077603   83628 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 18:56:00.077627   83628 kubeadm.go:636] restartCluster start
	I1025 18:56:00.077706   83628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 18:56:00.087287   83628 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:00.087360   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:00.141247   83628 kubeconfig.go:135] verify returned: extract IP: "newest-cni-343000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:56:00.141401   83628 kubeconfig.go:146] "newest-cni-343000" context is missing from /Users/jenkins/minikube-integration/17488-64832/kubeconfig - will repair!
	I1025 18:56:00.141719   83628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:56:00.143289   83628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 18:56:00.153344   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:00.153399   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:00.164003   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:00.164012   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:00.164059   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:00.174525   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:00.675066   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:00.675189   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:00.687491   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:01.175159   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:01.175376   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:01.188852   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:01.675474   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:01.675646   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:01.688324   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:02.174891   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:02.175088   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:02.188385   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:02.676169   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:02.676284   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:02.689171   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:03.176203   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:03.176458   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:03.189766   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:03.674857   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:03.674979   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:03.686811   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:04.174790   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:04.174900   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:04.187666   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:04.675371   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:04.675511   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:04.688459   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:05.174968   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:05.175085   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:05.187491   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:05.674886   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:05.674982   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:05.687406   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:06.174838   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:06.174960   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:06.186729   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:06.675115   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:06.675192   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:06.687017   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:07.176926   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:07.177046   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:07.189845   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:07.675528   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:07.675729   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:07.688506   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:08.174882   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:08.174980   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:08.187219   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:08.676335   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:08.676394   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:08.687635   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:09.176419   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:09.176559   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:09.189552   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:09.676430   83628 api_server.go:166] Checking apiserver status ...
	I1025 18:56:09.676532   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 18:56:09.689633   83628 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:10.154887   83628 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1025 18:56:10.155034   83628 kubeadm.go:1128] stopping kube-system containers ...
	I1025 18:56:10.155152   83628 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:56:10.179525   83628 docker.go:464] Stopping containers: [0fb1176339f8 7a209347c89e 48ca9d47f88b 8f39368b7272 8f57982e96e1 2dda95220d25 24c40b23c57a 8583fe5b9f37 d4d447fb063a 4c0c70614002 3128acab8e29 0b0f38caf416 6ad0e95c4260 1bf6f80b8b9a]
	I1025 18:56:10.179609   83628 ssh_runner.go:195] Run: docker stop 0fb1176339f8 7a209347c89e 48ca9d47f88b 8f39368b7272 8f57982e96e1 2dda95220d25 24c40b23c57a 8583fe5b9f37 d4d447fb063a 4c0c70614002 3128acab8e29 0b0f38caf416 6ad0e95c4260 1bf6f80b8b9a
	I1025 18:56:10.201404   83628 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 18:56:10.214357   83628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:56:10.223977   83628 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct 26 01:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 26 01:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct 26 01:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct 26 01:55 /etc/kubernetes/scheduler.conf
	
	I1025 18:56:10.224041   83628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 18:56:10.233322   83628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 18:56:10.242555   83628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 18:56:10.251667   83628 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:10.251725   83628 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:56:10.260771   83628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 18:56:10.270252   83628 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:56:10.270320   83628 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 18:56:10.279515   83628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:56:10.288945   83628 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 18:56:10.288958   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:10.340741   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:10.889370   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:11.026635   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:11.086230   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:11.180272   83628 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:56:11.180408   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:11.251151   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:11.769850   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:12.271379   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:12.358230   83628 api_server.go:72] duration metric: took 1.177923123s to wait for apiserver process to appear ...
	I1025 18:56:12.358247   83628 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:56:12.358273   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:12.359959   83628 api_server.go:269] stopped: https://127.0.0.1:60922/healthz: Get "https://127.0.0.1:60922/healthz": EOF
	I1025 18:56:12.359983   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:12.361083   83628 api_server.go:269] stopped: https://127.0.0.1:60922/healthz: Get "https://127.0.0.1:60922/healthz": EOF
	I1025 18:56:12.861296   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:14.751873   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 18:56:14.751910   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 18:56:14.751922   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:14.850117   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:14.850146   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:14.861299   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:14.869466   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:14.869491   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:15.362007   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:15.367188   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:15.367204   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:15.861775   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:15.870218   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:15.870258   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:16.361388   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:16.370915   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 18:56:16.370947   83628 api_server.go:103] status: https://127.0.0.1:60922/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1025 18:56:16.861469   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:16.868545   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 200:
	ok
	I1025 18:56:16.877392   83628 api_server.go:141] control plane version: v1.28.3
	I1025 18:56:16.877408   83628 api_server.go:131] duration metric: took 4.519020322s to wait for apiserver health ...
	I1025 18:56:16.877415   83628 cni.go:84] Creating CNI manager for ""
	I1025 18:56:16.877425   83628 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:56:16.899903   83628 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 18:56:16.921823   83628 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 18:56:16.932679   83628 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1025 18:56:16.950030   83628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:56:16.958679   83628 system_pods.go:59] 8 kube-system pods found
	I1025 18:56:16.958697   83628 system_pods.go:61] "coredns-5dd5756b68-jl7lg" [bd867f5e-4a47-4512-ba89-96b32dbfe9a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:56:16.958716   83628 system_pods.go:61] "etcd-newest-cni-343000" [4f259ab6-2d46-4eed-ae0f-6f97154458e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 18:56:16.958728   83628 system_pods.go:61] "kube-apiserver-newest-cni-343000" [f323d009-d5c3-4f20-a607-f9cff2b446b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 18:56:16.958739   83628 system_pods.go:61] "kube-controller-manager-newest-cni-343000" [9c7c0ea2-932b-42b6-b78e-c030b0509dae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 18:56:16.958747   83628 system_pods.go:61] "kube-proxy-jbcmw" [f8998e55-1244-46d7-959c-7c635e823a81] Running
	I1025 18:56:16.958752   83628 system_pods.go:61] "kube-scheduler-newest-cni-343000" [7c5468d6-8f17-48ee-9d9d-af81e174dd04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 18:56:16.958758   83628 system_pods.go:61] "metrics-server-57f55c9bc5-qh9cv" [8c87c66f-3d51-4387-a715-57e7f065731b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 18:56:16.958763   83628 system_pods.go:61] "storage-provisioner" [690b9c44-928a-466b-8dd2-09177d72006b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 18:56:16.958768   83628 system_pods.go:74] duration metric: took 8.725743ms to wait for pod list to return data ...
	I1025 18:56:16.958776   83628 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:56:16.962047   83628 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:56:16.962062   83628 node_conditions.go:123] node cpu capacity is 12
	I1025 18:56:16.962073   83628 node_conditions.go:105] duration metric: took 3.293635ms to run NodePressure ...
	I1025 18:56:16.962085   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:56:17.146763   83628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:56:17.156079   83628 ops.go:34] apiserver oom_adj: -16
	I1025 18:56:17.156096   83628 kubeadm.go:640] restartCluster took 17.077946084s
	I1025 18:56:17.156104   83628 kubeadm.go:406] StartCluster complete in 17.109266639s
	I1025 18:56:17.156119   83628 settings.go:142] acquiring lock: {Name:mkca0a8fe84aa865309571104a1d51551b90d38c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:56:17.156198   83628 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 18:56:17.156798   83628 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/kubeconfig: {Name:mka2fd80159d21a18312620daab0f942465327a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:56:17.157062   83628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 18:56:17.157080   83628 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1025 18:56:17.157133   83628 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-343000"
	I1025 18:56:17.157156   83628 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-343000"
	W1025 18:56:17.157164   83628 addons.go:240] addon storage-provisioner should already be in state true
	I1025 18:56:17.157193   83628 addons.go:69] Setting default-storageclass=true in profile "newest-cni-343000"
	I1025 18:56:17.157207   83628 addons.go:69] Setting dashboard=true in profile "newest-cni-343000"
	I1025 18:56:17.157215   83628 addons.go:69] Setting metrics-server=true in profile "newest-cni-343000"
	I1025 18:56:17.157223   83628 addons.go:231] Setting addon metrics-server=true in "newest-cni-343000"
	W1025 18:56:17.157228   83628 addons.go:240] addon metrics-server should already be in state true
	I1025 18:56:17.157226   83628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-343000"
	I1025 18:56:17.157209   83628 host.go:66] Checking if "newest-cni-343000" exists ...
	I1025 18:56:17.157250   83628 config.go:182] Loaded profile config "newest-cni-343000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:56:17.157255   83628 host.go:66] Checking if "newest-cni-343000" exists ...
	I1025 18:56:17.157229   83628 addons.go:231] Setting addon dashboard=true in "newest-cni-343000"
	W1025 18:56:17.157285   83628 addons.go:240] addon dashboard should already be in state true
	I1025 18:56:17.157353   83628 host.go:66] Checking if "newest-cni-343000" exists ...
	I1025 18:56:17.157494   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.157625   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.158417   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.158741   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.166844   83628 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-343000" context rescaled to 1 replicas
	I1025 18:56:17.166888   83628 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:56:17.204453   83628 out.go:177] * Verifying Kubernetes components...
	I1025 18:56:17.264394   83628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:56:17.265000   83628 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1025 18:56:17.297046   83628 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 18:56:17.276641   83628 addons.go:231] Setting addon default-storageclass=true in "newest-cni-343000"
	I1025 18:56:17.291495   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.318100   83628 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W1025 18:56:17.318109   83628 addons.go:240] addon default-storageclass should already be in state true
	I1025 18:56:17.360012   83628 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:56:17.360108   83628 host.go:66] Checking if "newest-cni-343000" exists ...
	I1025 18:56:17.397082   83628 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 18:56:17.418276   83628 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 18:56:17.418881   83628 cli_runner.go:164] Run: docker container inspect newest-cni-343000 --format={{.State.Status}}
	I1025 18:56:17.455018   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 18:56:17.492249   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 18:56:17.492275   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 18:56:17.455165   83628 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:56:17.492322   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:56:17.492379   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.492383   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.492456   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.495512   83628 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:56:17.495707   83628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:56:17.517349   83628 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:56:17.517371   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:56:17.517510   83628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-343000
	I1025 18:56:17.520068   83628 api_server.go:72] duration metric: took 353.129672ms to wait for apiserver process to appear ...
	I1025 18:56:17.520093   83628 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:56:17.520117   83628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60922/healthz ...
	I1025 18:56:17.527729   83628 api_server.go:279] https://127.0.0.1:60922/healthz returned 200:
	ok
	I1025 18:56:17.530665   83628 api_server.go:141] control plane version: v1.28.3
	I1025 18:56:17.530688   83628 api_server.go:131] duration metric: took 10.586938ms to wait for apiserver health ...
	I1025 18:56:17.530702   83628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 18:56:17.541232   83628 system_pods.go:59] 8 kube-system pods found
	I1025 18:56:17.541262   83628 system_pods.go:61] "coredns-5dd5756b68-jl7lg" [bd867f5e-4a47-4512-ba89-96b32dbfe9a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 18:56:17.541273   83628 system_pods.go:61] "etcd-newest-cni-343000" [4f259ab6-2d46-4eed-ae0f-6f97154458e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 18:56:17.541285   83628 system_pods.go:61] "kube-apiserver-newest-cni-343000" [f323d009-d5c3-4f20-a607-f9cff2b446b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 18:56:17.541293   83628 system_pods.go:61] "kube-controller-manager-newest-cni-343000" [9c7c0ea2-932b-42b6-b78e-c030b0509dae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 18:56:17.541301   83628 system_pods.go:61] "kube-proxy-jbcmw" [f8998e55-1244-46d7-959c-7c635e823a81] Running
	I1025 18:56:17.541309   83628 system_pods.go:61] "kube-scheduler-newest-cni-343000" [7c5468d6-8f17-48ee-9d9d-af81e174dd04] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 18:56:17.541316   83628 system_pods.go:61] "metrics-server-57f55c9bc5-qh9cv" [8c87c66f-3d51-4387-a715-57e7f065731b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 18:56:17.541323   83628 system_pods.go:61] "storage-provisioner" [690b9c44-928a-466b-8dd2-09177d72006b] Running
	I1025 18:56:17.541329   83628 system_pods.go:74] duration metric: took 10.61845ms to wait for pod list to return data ...
	I1025 18:56:17.541336   83628 default_sa.go:34] waiting for default service account to be created ...
	I1025 18:56:17.546179   83628 default_sa.go:45] found service account: "default"
	I1025 18:56:17.546207   83628 default_sa.go:55] duration metric: took 4.852014ms for default service account to be created ...
	I1025 18:56:17.546221   83628 kubeadm.go:581] duration metric: took 379.288312ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1025 18:56:17.546244   83628 node_conditions.go:102] verifying NodePressure condition ...
	I1025 18:56:17.551342   83628 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I1025 18:56:17.551358   83628 node_conditions.go:123] node cpu capacity is 12
	I1025 18:56:17.551368   83628 node_conditions.go:105] duration metric: took 5.11922ms to run NodePressure ...
	I1025 18:56:17.551379   83628 start.go:228] waiting for startup goroutines ...
	I1025 18:56:17.572004   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:56:17.573779   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:56:17.573818   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:56:17.595685   83628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60918 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/newest-cni-343000/id_rsa Username:docker}
	I1025 18:56:17.680951   83628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 18:56:17.680961   83628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:56:17.680964   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 18:56:17.681151   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 18:56:17.681167   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 18:56:17.701617   83628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:56:17.702542   83628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 18:56:17.702556   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 18:56:17.702650   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 18:56:17.702663   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 18:56:17.755506   83628 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 18:56:17.755522   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 18:56:17.755623   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 18:56:17.755634   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 18:56:17.778856   83628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 18:56:17.779265   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 18:56:17.779280   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 18:56:17.863826   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 18:56:17.863846   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 18:56:17.956157   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 18:56:17.956172   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 18:56:17.980204   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 18:56:17.980225   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 18:56:18.059494   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 18:56:18.059511   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 18:56:18.078787   83628 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 18:56:18.078801   83628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 18:56:18.153309   83628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 18:56:19.055155   83628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.374110435s)
	I1025 18:56:19.055169   83628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.353496248s)
	I1025 18:56:19.167938   83628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.389008032s)
	I1025 18:56:19.167962   83628 addons.go:467] Verifying addon metrics-server=true in "newest-cni-343000"
	I1025 18:56:19.454898   83628 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.301521131s)
	I1025 18:56:19.478933   83628 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-343000 addons enable metrics-server	
	
	
	I1025 18:56:19.540100   83628 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1025 18:56:19.582739   83628 addons.go:502] enable addons completed in 2.42559116s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1025 18:56:19.582772   83628 start.go:233] waiting for cluster config update ...
	I1025 18:56:19.582791   83628 start.go:242] writing updated cluster config ...
	I1025 18:56:19.583253   83628 ssh_runner.go:195] Run: rm -f paused
	I1025 18:56:19.623536   83628 start.go:600] kubectl: 1.27.2, cluster: 1.28.3 (minor skew: 1)
	I1025 18:56:19.644979   83628 out.go:177] * Done! kubectl is now configured to use "newest-cni-343000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.871242467Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.910518800Z" level=info msg="Loading containers: done."
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.919080909Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.919142665Z" level=info msg="Daemon has completed initialization"
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.951066840Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 26 01:39:17 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:17.951104585Z" level=info msg="API listen on [::]:2376"
	Oct 26 01:39:17 old-k8s-version-479000 systemd[1]: Started Docker Application Container Engine.
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Stopping Docker Application Container Engine...
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.083136157Z" level=info msg="Processing signal 'terminated'"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.084131940Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.084231689Z" level=info msg="Daemon shutdown complete"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[721]: time="2023-10-26T01:39:26.084420982Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: docker.service: Deactivated successfully.
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Stopped Docker Application Container Engine.
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Starting Docker Application Container Engine...
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.154547096Z" level=info msg="Starting up"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.167667501Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.424134561Z" level=info msg="Loading containers: start."
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.517825027Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.557588670Z" level=info msg="Loading containers: done."
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.586622144Z" level=info msg="Docker daemon" commit=1a79695 graphdriver=overlay2 version=24.0.6
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.586688417Z" level=info msg="Daemon has completed initialization"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.620523760Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 26 01:39:26 old-k8s-version-479000 dockerd[951]: time="2023-10-26T01:39:26.620527338Z" level=info msg="API listen on [::]:2376"
	Oct 26 01:39:26 old-k8s-version-479000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-10-26T02:02:56Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  02:02:56 up  1:25,  0 users,  load average: 0.03, 0.34, 0.67
	Linux old-k8s-version-479000 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Tue Oct 10 20:42:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kubelet <==
	* Oct 26 02:02:55 old-k8s-version-479000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 26 02:02:55 old-k8s-version-479000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1377.
	Oct 26 02:02:55 old-k8s-version-479000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 26 02:02:55 old-k8s-version-479000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 26 02:02:55 old-k8s-version-479000 kubelet[42531]: I1026 02:02:55.907401   42531 server.go:410] Version: v1.16.0
	Oct 26 02:02:55 old-k8s-version-479000 kubelet[42531]: I1026 02:02:55.909386   42531 plugins.go:100] No cloud provider specified.
	Oct 26 02:02:55 old-k8s-version-479000 kubelet[42531]: I1026 02:02:55.909560   42531 server.go:773] Client rotation is on, will bootstrap in background
	Oct 26 02:02:55 old-k8s-version-479000 kubelet[42531]: I1026 02:02:55.911300   42531 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 26 02:02:55 old-k8s-version-479000 kubelet[42531]: W1026 02:02:55.911902   42531 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Oct 26 02:02:55 old-k8s-version-479000 kubelet[42531]: W1026 02:02:55.911968   42531 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Oct 26 02:02:55 old-k8s-version-479000 kubelet[42531]: F1026 02:02:55.911993   42531 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Oct 26 02:02:55 old-k8s-version-479000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 26 02:02:55 old-k8s-version-479000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 26 02:02:56 old-k8s-version-479000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1378.
	Oct 26 02:02:56 old-k8s-version-479000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 26 02:02:56 old-k8s-version-479000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 26 02:02:56 old-k8s-version-479000 kubelet[42651]: I1026 02:02:56.663269   42651 server.go:410] Version: v1.16.0
	Oct 26 02:02:56 old-k8s-version-479000 kubelet[42651]: I1026 02:02:56.663632   42651 plugins.go:100] No cloud provider specified.
	Oct 26 02:02:56 old-k8s-version-479000 kubelet[42651]: I1026 02:02:56.663679   42651 server.go:773] Client rotation is on, will bootstrap in background
	Oct 26 02:02:56 old-k8s-version-479000 kubelet[42651]: I1026 02:02:56.665792   42651 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 26 02:02:56 old-k8s-version-479000 kubelet[42651]: W1026 02:02:56.666474   42651 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Oct 26 02:02:56 old-k8s-version-479000 kubelet[42651]: W1026 02:02:56.666540   42651 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Oct 26 02:02:56 old-k8s-version-479000 kubelet[42651]: F1026 02:02:56.667339   42651 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Oct 26 02:02:56 old-k8s-version-479000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 26 02:02:56 old-k8s-version-479000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 19:02:56.528004   83974 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 2 (399.269771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-479000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (373.15s)
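Reading the kubelet excerpt above: the service is crash-looping (restart counter already past 1370), each attempt dying with "failed to run Kubelet: mountpoint for cpu not found". Because kubelet never stays up, the dockershim socket and the apiserver on localhost:8443 never come up either, which is why the log collection and the status check in this stanza both fail. A minimal diagnostic sketch, assuming the old-k8s-version-479000 kic container is still running (container name taken from the log; the specific checks are illustrative, since the v1.16 kubelet predates cgroup v2 support and looks for a cgroup v1 cpu mountpoint):

	# Inspect how cgroups are mounted inside the node container.
	docker exec old-k8s-version-479000 sh -c "mount | grep cgroup"
	# Confirm it is the kubelet unit that systemd keeps restarting.
	docker exec old-k8s-version-479000 systemctl status kubelet --no-pager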

                                                
                                    

Test pass (280/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 12.26
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.32
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.33
16 TestDownloadOnly/DeleteAll 0.65
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
20 TestOffline 41.2
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
25 TestAddons/Setup 162.09
29 TestAddons/parallel/InspektorGadget 11.17
30 TestAddons/parallel/MetricsServer 5.86
31 TestAddons/parallel/HelmTiller 10.82
33 TestAddons/parallel/CSI 87.07
34 TestAddons/parallel/Headlamp 14.5
35 TestAddons/parallel/CloudSpanner 5.76
36 TestAddons/parallel/LocalPath 54.29
37 TestAddons/parallel/NvidiaDevicePlugin 5.65
40 TestAddons/serial/GCPAuth/Namespaces 0.1
41 TestAddons/StoppedEnableDisable 11.91
42 TestCertOptions 26.49
43 TestCertExpiration 233.29
44 TestDockerFlags 27.11
45 TestForceSystemdFlag 29.76
46 TestForceSystemdEnv 28.84
49 TestHyperKitDriverInstallOrUpdate 6.28
52 TestErrorSpam/setup 22.12
53 TestErrorSpam/start 2.07
54 TestErrorSpam/status 1.23
55 TestErrorSpam/pause 1.79
56 TestErrorSpam/unpause 1.81
57 TestErrorSpam/stop 11.49
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 38.32
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 40.17
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.06
68 TestFunctional/serial/CacheCmd/cache/add_remote 5.11
69 TestFunctional/serial/CacheCmd/cache/add_local 1.8
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.39
74 TestFunctional/serial/CacheCmd/cache/delete 0.17
77 TestFunctional/serial/ExtraConfig 39.26
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 3.23
80 TestFunctional/serial/LogsFileCmd 3.22
81 TestFunctional/serial/InvalidService 5.33
83 TestFunctional/parallel/ConfigCmd 0.51
84 TestFunctional/parallel/DashboardCmd 20.26
85 TestFunctional/parallel/DryRun 1.48
86 TestFunctional/parallel/InternationalLanguage 0.7
87 TestFunctional/parallel/StatusCmd 1.23
92 TestFunctional/parallel/AddonsCmd 0.27
93 TestFunctional/parallel/PersistentVolumeClaim 28.36
95 TestFunctional/parallel/SSHCmd 0.78
96 TestFunctional/parallel/CpCmd 1.99
97 TestFunctional/parallel/MySQL 41.63
98 TestFunctional/parallel/FileSync 0.47
99 TestFunctional/parallel/CertSync 2.74
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
107 TestFunctional/parallel/License 0.48
108 TestFunctional/parallel/Version/short 0.11
109 TestFunctional/parallel/Version/components 1.27
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
114 TestFunctional/parallel/ImageCommands/ImageBuild 3.71
115 TestFunctional/parallel/ImageCommands/Setup 2.94
116 TestFunctional/parallel/DockerEnv/bash 2.16
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.61
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.51
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.09
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.92
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.08
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.93
127 TestFunctional/parallel/ServiceCmd/DeployApp 19.17
128 TestFunctional/parallel/ServiceCmd/List 0.43
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
130 TestFunctional/parallel/ServiceCmd/HTTPS 15.02
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.2
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
142 TestFunctional/parallel/ServiceCmd/Format 15
143 TestFunctional/parallel/ServiceCmd/URL 15
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
145 TestFunctional/parallel/ProfileCmd/profile_list 0.48
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
147 TestFunctional/parallel/MountCmd/any-port 7.79
148 TestFunctional/parallel/MountCmd/specific-port 2.31
149 TestFunctional/parallel/MountCmd/VerifyCleanup 3.05
150 TestFunctional/delete_addon-resizer_images 0.14
151 TestFunctional/delete_my-image_image 0.05
152 TestFunctional/delete_minikube_cached_images 0.06
156 TestImageBuild/serial/Setup 22.08
157 TestImageBuild/serial/NormalBuild 1.62
158 TestImageBuild/serial/BuildWithBuildArg 0.97
159 TestImageBuild/serial/BuildWithDockerIgnore 0.76
160 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.76
170 TestJSONOutput/start/Command 36.86
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.62
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.62
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 10.94
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.78
195 TestKicCustomNetwork/create_custom_network 24.85
196 TestKicCustomNetwork/use_default_bridge_network 24.26
197 TestKicExistingNetwork 24.55
198 TestKicCustomSubnet 24.37
199 TestKicStaticIP 24.94
200 TestMainNoArgs 0.08
201 TestMinikubeProfile 51.3
204 TestMountStart/serial/StartWithMountFirst 7.42
205 TestMountStart/serial/VerifyMountFirst 0.38
206 TestMountStart/serial/StartWithMountSecond 7.44
207 TestMountStart/serial/VerifyMountSecond 0.38
208 TestMountStart/serial/DeleteFirst 2.08
209 TestMountStart/serial/VerifyMountPostDelete 0.38
210 TestMountStart/serial/Stop 1.57
211 TestMountStart/serial/RestartStopped 8.46
212 TestMountStart/serial/VerifyMountPostStop 0.38
215 TestMultiNode/serial/FreshStart2Nodes 51.31
218 TestMultiNode/serial/AddNode 15.09
219 TestMultiNode/serial/ProfileList 0.47
220 TestMultiNode/serial/CopyFile 14.18
221 TestMultiNode/serial/StopNode 2.95
222 TestMultiNode/serial/StartAfterStop 13.64
223 TestMultiNode/serial/RestartKeepsNodes 105.12
224 TestMultiNode/serial/DeleteNode 6.03
225 TestMultiNode/serial/StopMultiNode 12.74
226 TestMultiNode/serial/RestartMultiNode 57.75
227 TestMultiNode/serial/ValidateNameConflict 27.01
231 TestPreload 144.57
233 TestScheduledStopUnix 95.9
234 TestSkaffold 121.65
236 TestInsufficientStorage 10.84
252 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.41
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.44
254 TestStoppedBinaryUpgrade/Setup 0.85
256 TestStoppedBinaryUpgrade/MinikubeLogs 3.46
258 TestPause/serial/Start 75.17
259 TestPause/serial/SecondStartNoReconfiguration 37.25
260 TestPause/serial/Pause 0.7
261 TestPause/serial/VerifyStatus 0.39
262 TestPause/serial/Unpause 0.71
263 TestPause/serial/PauseAgain 0.89
264 TestPause/serial/DeletePaused 2.52
265 TestPause/serial/VerifyDeletedResources 0.59
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.43
275 TestNoKubernetes/serial/StartWithK8s 24.29
276 TestNoKubernetes/serial/StartWithStopK8s 17.88
277 TestNoKubernetes/serial/Start 6.47
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
279 TestNoKubernetes/serial/ProfileList 34.74
280 TestNoKubernetes/serial/Stop 1.56
281 TestNoKubernetes/serial/StartNoArgs 7.42
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
283 TestNetworkPlugins/group/auto/Start 38.81
284 TestNetworkPlugins/group/auto/KubeletFlags 0.39
285 TestNetworkPlugins/group/auto/NetCatPod 11.38
286 TestNetworkPlugins/group/auto/DNS 0.14
287 TestNetworkPlugins/group/auto/Localhost 0.12
288 TestNetworkPlugins/group/auto/HairPin 0.12
289 TestNetworkPlugins/group/kindnet/Start 52.03
290 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
291 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
292 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
293 TestNetworkPlugins/group/kindnet/DNS 0.14
294 TestNetworkPlugins/group/kindnet/Localhost 0.13
295 TestNetworkPlugins/group/kindnet/HairPin 0.12
296 TestNetworkPlugins/group/calico/Start 76.85
297 TestNetworkPlugins/group/custom-flannel/Start 53.81
298 TestNetworkPlugins/group/calico/ControllerPod 5.03
299 TestNetworkPlugins/group/calico/KubeletFlags 0.44
300 TestNetworkPlugins/group/calico/NetCatPod 12.31
301 TestNetworkPlugins/group/calico/DNS 0.17
302 TestNetworkPlugins/group/calico/Localhost 0.17
303 TestNetworkPlugins/group/calico/HairPin 0.14
304 TestNetworkPlugins/group/false/Start 39.53
305 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
306 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.32
307 TestNetworkPlugins/group/custom-flannel/DNS 0.16
308 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
309 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
310 TestNetworkPlugins/group/false/KubeletFlags 0.43
311 TestNetworkPlugins/group/false/NetCatPod 13.35
312 TestNetworkPlugins/group/enable-default-cni/Start 38.45
313 TestNetworkPlugins/group/false/DNS 0.14
314 TestNetworkPlugins/group/false/Localhost 0.14
315 TestNetworkPlugins/group/false/HairPin 0.13
316 TestNetworkPlugins/group/flannel/Start 38.17
317 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
318 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.51
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
322 TestNetworkPlugins/group/flannel/ControllerPod 11.02
323 TestNetworkPlugins/group/bridge/Start 76.89
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.49
325 TestNetworkPlugins/group/flannel/NetCatPod 12.36
326 TestNetworkPlugins/group/flannel/DNS 0.14
327 TestNetworkPlugins/group/flannel/Localhost 0.11
328 TestNetworkPlugins/group/flannel/HairPin 0.14
329 TestNetworkPlugins/group/kubenet/Start 74.71
330 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
331 TestNetworkPlugins/group/bridge/NetCatPod 10.28
332 TestNetworkPlugins/group/bridge/DNS 0.14
333 TestNetworkPlugins/group/bridge/Localhost 0.12
334 TestNetworkPlugins/group/bridge/HairPin 0.13
337 TestNetworkPlugins/group/kubenet/KubeletFlags 0.47
338 TestNetworkPlugins/group/kubenet/NetCatPod 11.34
339 TestNetworkPlugins/group/kubenet/DNS 0.15
340 TestNetworkPlugins/group/kubenet/Localhost 0.14
341 TestNetworkPlugins/group/kubenet/HairPin 0.16
343 TestStartStop/group/no-preload/serial/FirstStart 74.46
344 TestStartStop/group/no-preload/serial/DeployApp 9.33
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
346 TestStartStop/group/no-preload/serial/Stop 10.94
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.44
348 TestStartStop/group/no-preload/serial/SecondStart 311.45
351 TestStartStop/group/old-k8s-version/serial/Stop 1.57
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.43
354 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 21.02
355 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
356 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
357 TestStartStop/group/no-preload/serial/Pause 3.45
359 TestStartStop/group/embed-certs/serial/FirstStart 37.95
360 TestStartStop/group/embed-certs/serial/DeployApp 9.34
361 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
362 TestStartStop/group/embed-certs/serial/Stop 11
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.43
364 TestStartStop/group/embed-certs/serial/SecondStart 313.01
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.02
366 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
367 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
368 TestStartStop/group/embed-certs/serial/Pause 3.53
371 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.89
372 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
374 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.92
375 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.44
376 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 311.09
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 20.02
378 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
379 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
380 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.46
382 TestStartStop/group/newest-cni/serial/FirstStart 36.54
383 TestStartStop/group/newest-cni/serial/DeployApp 0
384 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
385 TestStartStop/group/newest-cni/serial/Stop 11.07
386 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.44
387 TestStartStop/group/newest-cni/serial/SecondStart 26.12
388 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.52
391 TestStartStop/group/newest-cni/serial/Pause 3.34
TestDownloadOnly/v1.16.0/json-events (12.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-018000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-018000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (12.262858481s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.26s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-018000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-018000: exit status 85 (318.367683ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-018000 | jenkins | v1.31.2 | 25 Oct 23 17:38 PDT |          |
	|         | -p download-only-018000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 17:38:43
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 17:38:43.011096   65294 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:38:43.011316   65294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:38:43.011321   65294 out.go:309] Setting ErrFile to fd 2...
	I1025 17:38:43.011326   65294 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:38:43.011511   65294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	W1025 17:38:43.011612   65294 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17488-64832/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17488-64832/.minikube/config/config.json: no such file or directory
	I1025 17:38:43.013285   65294 out.go:303] Setting JSON to true
	I1025 17:38:43.035144   65294 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":31091,"bootTime":1698249632,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 17:38:43.035246   65294 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 17:38:43.057937   65294 out.go:97] [download-only-018000] minikube v1.31.2 on Darwin 14.0
	I1025 17:38:43.079571   65294 out.go:169] MINIKUBE_LOCATION=17488
	I1025 17:38:43.058163   65294 notify.go:220] Checking for updates...
	W1025 17:38:43.058201   65294 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 17:38:43.126463   65294 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:38:43.148402   65294 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 17:38:43.169638   65294 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:38:43.191473   65294 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	W1025 17:38:43.235368   65294 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 17:38:43.235849   65294 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 17:38:43.296499   65294 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 17:38:43.296624   65294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:38:43.399571   65294 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2023-10-26 00:38:43.385251408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:38:43.421103   65294 out.go:97] Using the docker driver based on user configuration
	I1025 17:38:43.421132   65294 start.go:298] selected driver: docker
	I1025 17:38:43.421142   65294 start.go:902] validating driver "docker" against <nil>
	I1025 17:38:43.421352   65294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:38:43.524354   65294 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2023-10-26 00:38:43.513390067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:38:43.524526   65294 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 17:38:43.527471   65294 start_flags.go:386] Using suggested 5891MB memory alloc based on sys=32768MB, container=5939MB
	I1025 17:38:43.527649   65294 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 17:38:43.549052   65294 out.go:169] Using Docker Desktop driver with root privileges
	I1025 17:38:43.570205   65294 cni.go:84] Creating CNI manager for ""
	I1025 17:38:43.570253   65294 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 17:38:43.570271   65294 start_flags.go:323] config:
	{Name:download-only-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-018000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:38:43.591870   65294 out.go:97] Starting control plane node download-only-018000 in cluster download-only-018000
	I1025 17:38:43.591933   65294 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 17:38:43.614038   65294 out.go:97] Pulling base image ...
	I1025 17:38:43.614145   65294 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 17:38:43.614237   65294 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 17:38:43.667602   65294 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 17:38:43.668069   65294 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 17:38:43.668197   65294 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 17:38:43.669434   65294 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 17:38:43.669457   65294 cache.go:56] Caching tarball of preloaded images
	I1025 17:38:43.669593   65294 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 17:38:43.691119   65294 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1025 17:38:43.691177   65294 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:38:43.777066   65294 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 17:38:50.397579   65294 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:38:50.397765   65294 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:38:50.948990   65294 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 17:38:50.949212   65294 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/download-only-018000/config.json ...
	I1025 17:38:50.949236   65294 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/download-only-018000/config.json: {Name:mk5d3d30f82863b9a21ef499a1e39b6460adaf71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:38:50.950227   65294 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 17:38:50.950479   65294 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I1025 17:38:52.637279   65294 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-018000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.32s)
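For reference, the preload-exists and kubectl subtests above essentially assert that the files downloaded in this run are present in the local cache; a manual spot check (paths copied verbatim from the log above, specific to this Jenkins workspace) could look like:

	ls -lh /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	ls -lh /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.16.0/kubectl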

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-018000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-018000: exit status 85 (325.601209ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-018000 | jenkins | v1.31.2 | 25 Oct 23 17:38 PDT |          |
	|         | -p download-only-018000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-018000 | jenkins | v1.31.2 | 25 Oct 23 17:38 PDT |          |
	|         | -p download-only-018000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 17:38:55
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 17:38:55.600100   65328 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:38:55.600370   65328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:38:55.600375   65328 out.go:309] Setting ErrFile to fd 2...
	I1025 17:38:55.600379   65328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:38:55.600552   65328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	W1025 17:38:55.600644   65328 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17488-64832/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17488-64832/.minikube/config/config.json: no such file or directory
	I1025 17:38:55.601972   65328 out.go:303] Setting JSON to true
	I1025 17:38:55.624815   65328 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":31103,"bootTime":1698249632,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 17:38:55.624916   65328 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 17:38:55.646456   65328 out.go:97] [download-only-018000] minikube v1.31.2 on Darwin 14.0
	I1025 17:38:55.668931   65328 out.go:169] MINIKUBE_LOCATION=17488
	I1025 17:38:55.646655   65328 notify.go:220] Checking for updates...
	I1025 17:38:55.690280   65328 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:38:55.712335   65328 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 17:38:55.734009   65328 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:38:55.755067   65328 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	W1025 17:38:55.797273   65328 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 17:38:55.798002   65328 config.go:182] Loaded profile config "download-only-018000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1025 17:38:55.798089   65328 start.go:810] api.Load failed for download-only-018000: filestore "download-only-018000": Docker machine "download-only-018000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 17:38:55.798255   65328 driver.go:378] Setting default libvirt URI to qemu:///system
	W1025 17:38:55.798297   65328 start.go:810] api.Load failed for download-only-018000: filestore "download-only-018000": Docker machine "download-only-018000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 17:38:55.857430   65328 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 17:38:55.857548   65328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:38:55.959655   65328 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2023-10-26 00:38:55.945696378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:38:55.980948   65328 out.go:97] Using the docker driver based on existing profile
	I1025 17:38:55.980985   65328 start.go:298] selected driver: docker
	I1025 17:38:55.980996   65328 start.go:902] validating driver "docker" against &{Name:download-only-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-018000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:38:55.981315   65328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:38:56.083537   65328 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:false NGoroutines:58 SystemTime:2023-10-26 00:38:56.070643682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:38:56.086776   65328 cni.go:84] Creating CNI manager for ""
	I1025 17:38:56.086801   65328 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 17:38:56.086815   65328 start_flags.go:323] config:
	{Name:download-only-018000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:5891 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-018000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:38:56.108319   65328 out.go:97] Starting control plane node download-only-018000 in cluster download-only-018000
	I1025 17:38:56.108357   65328 cache.go:121] Beginning downloading kic base image for docker with docker
	I1025 17:38:56.129187   65328 out.go:97] Pulling base image ...
	I1025 17:38:56.129260   65328 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:38:56.129369   65328 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1025 17:38:56.179981   65328 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1025 17:38:56.180182   65328 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1025 17:38:56.180209   65328 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1025 17:38:56.180217   65328 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1025 17:38:56.180231   65328 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1025 17:38:56.185050   65328 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 17:38:56.185061   65328 cache.go:56] Caching tarball of preloaded images
	I1025 17:38:56.186172   65328 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:38:56.207452   65328 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1025 17:38:56.207478   65328 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:38:56.287135   65328 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4?checksum=md5:82104bbf889ff8b69d5c141ce86c05ac -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 17:39:01.450470   65328 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:39:01.450670   65328 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 17:39:02.075206   65328 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 17:39:02.075302   65328 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/download-only-018000/config.json ...
	I1025 17:39:02.075734   65328 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 17:39:02.076477   65328 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl
	I1025 17:39:02.544803   65328 out.go:169] 
	W1025 17:39:02.566960   65328 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17488-64832/.minikube/cache/darwin/amd64/v1.28.3/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520 0x51d5520] Decompressors:map[bz2:0xc000721b50 gz:0xc000721b58 tar:0xc000721b00 tar.bz2:0xc000721b10 tar.gz:0xc000721b20 tar.xz:0xc000721b30 tar.zst:0xc000721b40 tbz2:0xc000721b10 tgz:0xc000721b20 txz:0xc000721b30 tzst:0xc000721b40 xz:0xc000721b60 zip:0xc000721b70 zst:0xc000721b68] Getters:map[file:0xc00078e6a0 http:0xc000c25270 https:0xc000c252c0] Dir:false ProgressListener:<nil> Insecure:false Disa
bleSymlinks:false Options:[]}: bad response code: 404
	W1025 17:39:02.566983   65328 out_reason.go:110] 
	W1025 17:39:02.590787   65328 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 17:39:02.612805   65328 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-018000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.33s)
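The failure above is the kubectl cache step: download.go:107 fetches https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl with a checksum=file: query, and the getter struct dumped in the error is hashicorp/go-getter's client configuration, which reported "bad response code: 404". A minimal sketch of the same call pattern, assuming go-getter v1 and an illustrative destination path rather than the CI cache directory:

	package main

	import (
		"log"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// Same URL shape as download.go:107: the checksum=file: query makes
		// go-getter fetch the .sha256 file first and verify the binary against it.
		src := "https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl" +
			"?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256"
		dst := "/tmp/kubectl.download" // illustrative path, not the CI cache location

		// A 404 on either the binary or the checksum file surfaces as the
		// "bad response code: 404" seen in the test output above.
		if err := getter.GetFile(dst, src); err != nil {
			log.Fatalf("download failed: %v", err)
		}
		log.Printf("kubectl cached at %s", dst)
	}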

                                                
                                    
TestDownloadOnly/DeleteAll (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.65s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-018000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestOffline (41.2s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-886000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-886000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (38.50014719s)
helpers_test.go:175: Cleaning up "offline-docker-886000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-886000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-886000: (2.700059077s)
--- PASS: TestOffline (41.20s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-882000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-882000: exit status 85 (189.772775ms)

                                                
                                                
-- stdout --
	* Profile "addons-882000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-882000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-882000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-882000: exit status 85 (210.307075ms)

                                                
                                                
-- stdout --
	* Profile "addons-882000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-882000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                    
TestAddons/Setup (162.09s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-882000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-882000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m42.086756949s)
--- PASS: TestAddons/Setup (162.09s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.17s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8fcmt" [68ea382b-b5e0-4cdf-ae89-785194cd72eb] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011442451s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-882000
addons_test.go:840: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-882000: (6.157983304s)
--- PASS: TestAddons/parallel/InspektorGadget (11.17s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.041422ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-b5924" [ae457e89-7fce-4544-9da5-f39473c627e6] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.016005166s
addons_test.go:414: (dbg) Run:  kubectl --context addons-882000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-amd64 -p addons-882000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.82s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.284426ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-zdwc7" [e963b051-6b38-4831-995a-a4fb5756a648] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012655594s
addons_test.go:472: (dbg) Run:  kubectl --context addons-882000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-882000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.054593436s)
addons_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p addons-882000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.82s)

                                                
                                    
TestAddons/parallel/CSI (87.07s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 62.209278ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-882000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-882000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6c7e9474-d719-42a7-b679-e23590d352a5] Pending
helpers_test.go:344: "task-pv-pod" [6c7e9474-d719-42a7-b679-e23590d352a5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6c7e9474-d719-42a7-b679-e23590d352a5] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.012998063s
addons_test.go:583: (dbg) Run:  kubectl --context addons-882000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-882000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-882000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-882000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-882000 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-882000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-882000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-882000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [06abc82c-aa09-407d-9bbc-bb0d3140cb82] Pending
helpers_test.go:344: "task-pv-pod-restore" [06abc82c-aa09-407d-9bbc-bb0d3140cb82] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [06abc82c-aa09-407d-9bbc-bb0d3140cb82] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.014641066s
addons_test.go:625: (dbg) Run:  kubectl --context addons-882000 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-882000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-882000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 -p addons-882000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-amd64 -p addons-882000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.82237674s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-amd64 -p addons-882000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-darwin-amd64 -p addons-882000 addons disable volumesnapshots --alsologtostderr -v=1: (1.017891606s)
--- PASS: TestAddons/parallel/CSI (87.07s)
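The long run of helpers_test.go:394 lines above is the helper polling the claim's .status.phase until it leaves Pending. The same check can be expressed with client-go; a minimal sketch, assuming an illustrative kubeconfig path and the hpvc claim in the default namespace (the helper itself shells out to kubectl rather than using client-go):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is illustrative; the test runs against the addons-882000 context.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// Poll the claim's .status.phase until it reports Bound, the same field
		// the repeated "kubectl get pvc ... -o jsonpath={.status.phase}" calls read.
		for {
			pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(context.TODO(), "hpvc", metav1.GetOptions{})
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println("phase:", pvc.Status.Phase)
			if pvc.Status.Phase == corev1.ClaimBound {
				break
			}
			time.Sleep(2 * time.Second)
		}
	}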

                                                
                                    
TestAddons/parallel/Headlamp (14.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-882000 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-882000 --alsologtostderr -v=1: (1.484075417s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-pc9qf" [df0dbb8d-6ded-4c9e-b308-c8da5bda7d96] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-pc9qf" [df0dbb8d-6ded-4c9e-b308-c8da5bda7d96] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.014471726s
--- PASS: TestAddons/parallel/Headlamp (14.50s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.76s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-5cckq" [074bca80-502f-4842-bbec-bb3977a4d491] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01280151s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-882000
--- PASS: TestAddons/parallel/CloudSpanner (5.76s)

                                                
                                    
TestAddons/parallel/LocalPath (54.29s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-882000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-882000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-882000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [623392d7-f2f5-4032-9239-b34caa3e9e22] Pending
helpers_test.go:344: "test-local-path" [623392d7-f2f5-4032-9239-b34caa3e9e22] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [623392d7-f2f5-4032-9239-b34caa3e9e22] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [623392d7-f2f5-4032-9239-b34caa3e9e22] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.012023938s
addons_test.go:890: (dbg) Run:  kubectl --context addons-882000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-amd64 -p addons-882000 ssh "cat /opt/local-path-provisioner/pvc-c86b16cc-852d-45fc-8956-065ea1b01617_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-882000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-882000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 -p addons-882000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-darwin-amd64 -p addons-882000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.21781086s)
--- PASS: TestAddons/parallel/LocalPath (54.29s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fhxqk" [9a6a7809-e56a-4e29-af2d-001ebfdc9716] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.01667556s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-882000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-882000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-882000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.91s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-882000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-882000: (11.172394197s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-882000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-882000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-882000
--- PASS: TestAddons/StoppedEnableDisable (11.91s)

                                                
                                    
TestCertOptions (26.49s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-824000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-824000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (23.157521586s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-824000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-824000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-824000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-824000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-824000: (2.504246865s)
--- PASS: TestCertOptions (26.49s)

                                                
                                    
TestCertExpiration (233.29s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-531000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-531000 --memory=2048 --cert-expiration=3m --driver=docker : (24.581792126s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-531000 --memory=2048 --cert-expiration=8760h --driver=docker 
E1025 18:19:28.596161   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:28.601801   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:28.612106   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:28.632548   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:28.672672   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:28.753092   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:28.914785   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:29.235052   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:29.875216   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:31.156669   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:33.718601   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:38.840086   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:49.082617   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:19:54.204120   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-531000 --memory=2048 --cert-expiration=8760h --driver=docker : (26.17437597s)
helpers_test.go:175: Cleaning up "cert-expiration-531000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-531000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-531000: (2.529658827s)
--- PASS: TestCertExpiration (233.29s)
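TestCertExpiration starts the profile with --cert-expiration=3m and later restarts it with 8760h; the flag controls how far in the future the NotAfter date of the generated certificates lies. A minimal, illustrative sketch (not the test's actual assertion) for inspecting that field with crypto/x509, assuming a client.crt copied out of a profile directory:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Illustrative path; profiles in this run live under
		// /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/<name>/.
		data, err := os.ReadFile("client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found in client.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// With --cert-expiration=3m, NotAfter is about three minutes after issuance;
		// with 8760h it is roughly one year out.
		fmt.Println("expires:", cert.NotAfter)
	}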

                                                
                                    
TestDockerFlags (27.11s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-049000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-049000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (23.590121179s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-049000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-049000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-049000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-049000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-049000: (2.695272321s)
--- PASS: TestDockerFlags (27.11s)

                                                
                                    
TestForceSystemdFlag (29.76s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-557000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-557000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (26.545159604s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-557000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-557000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-557000: (2.75401518s)
--- PASS: TestForceSystemdFlag (29.76s)

                                                
                                    
TestForceSystemdEnv (28.84s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-477000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-477000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (25.655459943s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-477000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-477000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-477000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-477000: (2.665786804s)
--- PASS: TestForceSystemdEnv (28.84s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (6.28s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.28s)

                                                
                                    
TestErrorSpam/setup (22.12s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-797000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-797000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 --driver=docker : (22.124004249s)
--- PASS: TestErrorSpam/setup (22.12s)

                                                
                                    
TestErrorSpam/start (2.07s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 start --dry-run
--- PASS: TestErrorSpam/start (2.07s)

                                                
                                    
TestErrorSpam/status (1.23s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 status
--- PASS: TestErrorSpam/status (1.23s)

                                                
                                    
TestErrorSpam/pause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 pause
--- PASS: TestErrorSpam/pause (1.79s)

                                                
                                    
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
TestErrorSpam/stop (11.49s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 stop: (10.85266863s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-797000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-797000 stop
--- PASS: TestErrorSpam/stop (11.49s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17488-64832/.minikube/files/etc/test/nested/copy/65292/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (38.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-188000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-188000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (38.320965258s)
--- PASS: TestFunctional/serial/StartWithProxy (38.32s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (40.17s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-188000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-188000 --alsologtostderr -v=8: (40.173045955s)
functional_test.go:659: soft start took 40.173539337s for "functional-188000" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.17s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-188000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (5.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 cache add registry.k8s.io/pause:3.1: (1.799868812s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 cache add registry.k8s.io/pause:3.3: (1.708430713s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 cache add registry.k8s.io/pause:latest: (1.600087313s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.8s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local376011519/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 cache add minikube-local-cache-test:functional-188000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 cache add minikube-local-cache-test:functional-188000: (1.163486854s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 cache delete minikube-local-cache-test:functional-188000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-188000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.80s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (395.061399ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 cache reload: (1.161210434s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.39s)
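
The cache_reload block exercises the image cache end to end: the pause image is deleted inside the node, confirmed missing with crictl, restored with cache reload, and confirmed present again. A minimal reproduction sketch against the same profile, using only the commands recorded above:

    # remove the image from the node's container runtime
    out/minikube-darwin-amd64 -p functional-188000 ssh sudo docker rmi registry.k8s.io/pause:latest
    # this lookup is expected to fail (exit status 1) while the image is gone
    out/minikube-darwin-amd64 -p functional-188000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # re-load everything in the local cache back into the node
    out/minikube-darwin-amd64 -p functional-188000 cache reload
    # the same lookup should now succeed
    out/minikube-darwin-amd64 -p functional-188000 ssh sudo crictl inspecti registry.k8s.io/pause:latest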

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (39.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-188000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 17:46:51.099170   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:51.129944   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:51.142047   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:51.163058   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:51.203169   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:51.284054   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:51.444218   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:51.765142   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:52.406059   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:53.686261   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:46:56.248571   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:47:01.369071   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 17:47:11.611028   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-188000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.264199059s)
functional_test.go:757: restart took 39.26434372s for "functional-188000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.26s)
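
ExtraConfig restarts the existing cluster with an extra apiserver flag and waits for every component to come back; the cert_rotation errors interleaved above appear to come from a stale kubeconfig entry for the earlier addons-882000 profile and are not related to this test. The restart itself reduces to the single command from this run:

    # restart the profile with an extra apiserver admission plugin and wait for all components
    out/minikube-darwin-amd64 start -p functional-188000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all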

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-188000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (3.23s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 logs: (3.234127232s)
--- PASS: TestFunctional/serial/LogsCmd (3.23s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (3.22s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd3380516898/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd3380516898/001/logs.txt: (3.215601167s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.22s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5.33s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-188000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-188000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-188000: exit status 115 (660.664023ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31582 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-188000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-188000 delete -f testdata/invalidsvc.yaml: (1.459395623s)
--- PASS: TestFunctional/serial/InvalidService (5.33s)
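
InvalidService verifies that minikube service fails loudly, with exit status 115 and SVC_UNREACHABLE, when a service has no running pods behind it. A sketch of the same check; the relative testdata/invalidsvc.yaml path is taken from this log and assumes the integration test's working directory in a minikube checkout:

    # create a service whose selector matches no running pods
    kubectl --context functional-188000 apply -f testdata/invalidsvc.yaml
    # expected to fail with exit status 115 (SVC_UNREACHABLE)
    out/minikube-darwin-amd64 service invalid-svc -p functional-188000
    # clean up
    kubectl --context functional-188000 delete -f testdata/invalidsvc.yaml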

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 config get cpus: exit status 14 (60.911161ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 config get cpus: exit status 14 (63.308034ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
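
ConfigCmd exercises the per-profile config store: config get on an unset key exits with status 14 and "specified key could not be found in config", while set/get/unset round-trips cleanly. The same round trip by hand:

    # unset key: this get is expected to exit with status 14
    out/minikube-darwin-amd64 -p functional-188000 config get cpus
    # set, read back, then clear the value again
    out/minikube-darwin-amd64 -p functional-188000 config set cpus 2
    out/minikube-darwin-amd64 -p functional-188000 config get cpus
    out/minikube-darwin-amd64 -p functional-188000 config unset cpus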

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (20.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-188000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-188000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 67757: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.26s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-188000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-188000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (757.383667ms)

                                                
                                                
-- stdout --
	* [functional-188000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 17:49:06.272764   67660 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:49:06.273002   67660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:49:06.273009   67660 out.go:309] Setting ErrFile to fd 2...
	I1025 17:49:06.273014   67660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:49:06.273217   67660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 17:49:06.275223   67660 out.go:303] Setting JSON to false
	I1025 17:49:06.298142   67660 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":31714,"bootTime":1698249632,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 17:49:06.298245   67660 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 17:49:06.321942   67660 out.go:177] * [functional-188000] minikube v1.31.2 on Darwin 14.0
	I1025 17:49:06.384790   67660 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 17:49:06.362764   67660 notify.go:220] Checking for updates...
	I1025 17:49:06.426632   67660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:49:06.468606   67660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 17:49:06.510782   67660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:49:06.531702   67660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 17:49:06.552673   67660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 17:49:06.574255   67660 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 17:49:06.574695   67660 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 17:49:06.637929   67660 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 17:49:06.638090   67660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:49:06.766910   67660 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 00:49:06.753723206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:49:06.790867   67660 out.go:177] * Using the docker driver based on existing profile
	I1025 17:49:06.848729   67660 start.go:298] selected driver: docker
	I1025 17:49:06.848748   67660 start.go:902] validating driver "docker" against &{Name:functional-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:49:06.848834   67660 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 17:49:06.872969   67660 out.go:177] 
	W1025 17:49:06.893916   67660 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 17:49:06.915005   67660 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-188000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.48s)
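
DryRun checks argument validation without touching the cluster: asking for 250MB of memory is rejected up front with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY, minimum 1800MB), while a dry run that keeps the existing profile's settings succeeds. The two commands from this run:

    # rejected before any work is done: 250MB is below the usable minimum
    out/minikube-darwin-amd64 start -p functional-188000 --dry-run --memory 250MB --alsologtostderr --driver=docker
    # dry run with the profile's existing settings; validates and exits without starting anything
    out/minikube-darwin-amd64 start -p functional-188000 --dry-run --alsologtostderr -v=1 --driver=docker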

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-188000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-188000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (700.420528ms)

                                                
                                                
-- stdout --
	* [functional-188000] minikube v1.31.2 sur Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 17:49:07.748268   67704 out.go:296] Setting OutFile to fd 1 ...
	I1025 17:49:07.748468   67704 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:49:07.748474   67704 out.go:309] Setting ErrFile to fd 2...
	I1025 17:49:07.748478   67704 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 17:49:07.748710   67704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 17:49:07.750315   67704 out.go:303] Setting JSON to false
	I1025 17:49:07.774489   67704 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":31715,"bootTime":1698249632,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1025 17:49:07.774616   67704 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 17:49:07.796736   67704 out.go:177] * [functional-188000] minikube v1.31.2 sur Darwin 14.0
	I1025 17:49:07.838693   67704 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 17:49:07.838768   67704 notify.go:220] Checking for updates...
	I1025 17:49:07.897760   67704 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	I1025 17:49:07.939579   67704 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 17:49:07.960721   67704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:49:07.981661   67704 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	I1025 17:49:08.023668   67704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 17:49:08.045044   67704 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 17:49:08.045427   67704 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 17:49:08.102308   67704 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.24.2 (124339)
	I1025 17:49:08.102446   67704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 17:49:08.211351   67704 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:70 SystemTime:2023-10-26 00:49:08.199800195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6227828736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfin
ed name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manage
s Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Sc
out Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
	I1025 17:49:08.254291   67704 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1025 17:49:08.275338   67704 start.go:298] selected driver: docker
	I1025 17:49:08.275353   67704 start.go:902] validating driver "docker" against &{Name:functional-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-188000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 17:49:08.275417   67704 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 17:49:08.315183   67704 out.go:177] 
	W1025 17:49:08.336429   67704 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 17:49:08.357331   67704 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (28.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6d3f2cd5-53c8-4ab4-8e2e-3ea815bc540f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014343355s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-188000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-188000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-188000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-188000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [091e9eb8-5ce1-4123-87cb-f5a373f3a397] Pending
helpers_test.go:344: "sp-pod" [091e9eb8-5ce1-4123-87cb-f5a373f3a397] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [091e9eb8-5ce1-4123-87cb-f5a373f3a397] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.012018263s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-188000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-188000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-188000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [59414f35-13da-46a8-820f-e72ab41640fa] Pending
helpers_test.go:344: "sp-pod" [59414f35-13da-46a8-820f-e72ab41640fa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [59414f35-13da-46a8-820f-e72ab41640fa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.011824379s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-188000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.36s)
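
The PVC test demonstrates that data written into a dynamically provisioned volume survives pod deletion: a file is written through the first sp-pod, the pod is deleted and recreated, and the file is read back. A condensed sketch using the same manifests and commands (the testdata paths assume the integration test's working directory, and the new pod must be Running before the final check):

    # create the claim and a pod that mounts it at /tmp/mount
    kubectl --context functional-188000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-188000 apply -f testdata/storage-provisioner/pod.yaml
    # write a marker file through the first pod, then recreate the pod
    kubectl --context functional-188000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-188000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-188000 apply -f testdata/storage-provisioner/pod.yaml
    # once the new pod is Running, the marker file should still be there
    kubectl --context functional-188000 exec sp-pod -- ls /tmp/mount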

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh -n functional-188000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 cp functional-188000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd3312432703/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh -n functional-188000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (41.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-188000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-lnv65" [0c76a6dc-95f9-4098-8acc-508fc7299e8a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-lnv65" [0c76a6dc-95f9-4098-8acc-508fc7299e8a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 35.122732973s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-188000 exec mysql-859648c796-lnv65 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-188000 exec mysql-859648c796-lnv65 -- mysql -ppassword -e "show databases;": exit status 1 (283.445963ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-188000 exec mysql-859648c796-lnv65 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-188000 exec mysql-859648c796-lnv65 -- mysql -ppassword -e "show databases;": exit status 1 (115.092956ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1025 17:48:13.055618   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-188000 exec mysql-859648c796-lnv65 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-188000 exec mysql-859648c796-lnv65 -- mysql -ppassword -e "show databases;": exit status 1 (115.499498ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-188000 exec mysql-859648c796-lnv65 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (41.63s)
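
The MySQL test shows why the harness retries: even after the pod reports Running, mysqld may still be initializing, so early exec attempts fail with access-denied (ERROR 1045) or socket (ERROR 2002) errors before the query finally succeeds. A sketch of the same probe, retried until it returns the database list; the pod name is specific to this run, and the password comes from testdata/mysql.yaml:

    # deploy the test MySQL instance
    kubectl --context functional-188000 replace --force -f testdata/mysql.yaml
    # look up the current pod name (it changes between runs)
    kubectl --context functional-188000 get pods -l app=mysql
    # probe until mysqld is ready; early attempts may fail with ERROR 1045 or 2002
    kubectl --context functional-188000 exec mysql-859648c796-lnv65 -- mysql -ppassword -e "show databases;"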

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/65292/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /etc/test/nested/copy/65292/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/65292.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /etc/ssl/certs/65292.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/65292.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /usr/share/ca-certificates/65292.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/652922.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /etc/ssl/certs/652922.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/652922.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /usr/share/ca-certificates/652922.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.74s)
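
CertSync confirms that the host's extra certificates are copied into the node under both /etc/ssl/certs and /usr/share/ca-certificates, including hashed names such as 51391683.0. The check is just a set of reads over ssh; the 65292 in the filenames matches this run's test process id, so it will differ between runs:

    # the synced certificate, readable from both locations plus the hashed name
    out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /etc/ssl/certs/65292.pem"
    out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /usr/share/ca-certificates/65292.pem"
    out/minikube-darwin-amd64 -p functional-188000 ssh "sudo cat /etc/ssl/certs/51391683.0"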

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-188000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 ssh "sudo systemctl is-active crio": exit status 1 (568.03975ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
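
NonActiveRuntimeDisabled checks that only the selected container runtime is active: with docker as the runtime, crio must report inactive, which here surfaced as "inactive" on stdout and ssh exit status 3. The single probe, expected to return non-zero:

    # "inactive" output and a non-zero exit are the expected result when docker is the active runtime
    out/minikube-darwin-amd64 -p functional-188000 ssh "sudo systemctl is-active crio"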

                                                
                                    
x
+
TestFunctional/parallel/License (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 version -o=json --components: (1.26992755s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-188000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-188000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-188000
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-188000 image ls --format short --alsologtostderr:
I1025 17:49:18.480100   67960 out.go:296] Setting OutFile to fd 1 ...
I1025 17:49:18.480341   67960 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:18.480347   67960 out.go:309] Setting ErrFile to fd 2...
I1025 17:49:18.480351   67960 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:18.480556   67960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
I1025 17:49:18.481317   67960 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:18.481417   67960 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:18.481897   67960 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
I1025 17:49:18.542378   67960 ssh_runner.go:195] Run: systemctl --version
I1025 17:49:18.542488   67960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
I1025 17:49:18.603335   67960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
I1025 17:49:18.694143   67960 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-188000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | b135667c98980 | 47.7MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/mysql                     | 5.7               | 3b85be0b10d38 | 581MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 593aee2afb642 | 187MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-188000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-188000 | ea4890bb9b212 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-188000 image ls --format table --alsologtostderr:
I1025 17:49:19.524407   67978 out.go:296] Setting OutFile to fd 1 ...
I1025 17:49:19.524726   67978 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:19.524731   67978 out.go:309] Setting ErrFile to fd 2...
I1025 17:49:19.524735   67978 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:19.524957   67978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
I1025 17:49:19.525615   67978 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:19.525712   67978 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:19.526148   67978 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
I1025 17:49:19.591488   67978 ssh_runner.go:195] Run: systemctl --version
I1025 17:49:19.591569   67978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
I1025 17:49:19.653650   67978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
I1025 17:49:19.744392   67978 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-188000 image ls --format json --alsologtostderr:
[{"id":"593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-188000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"82e4c8a736a4fcf22b5ef
9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47700000"},{"id":"3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kube
rnetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"ea4890bb9b2124b3ddc56b422481e63a819dd43842f4c259fcb62ca8a431eeda","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-188000"],"size":"30"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-188000 image ls --format json --alsologtostderr:
I1025 17:49:18.827785   67966 out.go:296] Setting OutFile to fd 1 ...
I1025 17:49:18.828016   67966 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:18.828022   67966 out.go:309] Setting ErrFile to fd 2...
I1025 17:49:18.828026   67966 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:18.828245   67966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
I1025 17:49:18.828971   67966 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:18.829076   67966 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:18.829530   67966 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
I1025 17:49:18.888262   67966 ssh_runner.go:195] Run: systemctl --version
I1025 17:49:18.888358   67966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
I1025 17:49:18.953610   67966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
I1025 17:49:19.045738   67966 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-188000 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-188000
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: 3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47700000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ea4890bb9b2124b3ddc56b422481e63a819dd43842f4c259fcb62ca8a431eeda
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-188000
size: "30"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-188000 image ls --format yaml --alsologtostderr:
I1025 17:49:19.166084   67972 out.go:296] Setting OutFile to fd 1 ...
I1025 17:49:19.166358   67972 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:19.166364   67972 out.go:309] Setting ErrFile to fd 2...
I1025 17:49:19.166369   67972 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:19.166603   67972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
I1025 17:49:19.167343   67972 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:19.167455   67972 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:19.167926   67972 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
I1025 17:49:19.229633   67972 ssh_runner.go:195] Run: systemctl --version
I1025 17:49:19.229712   67972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
I1025 17:49:19.293195   67972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
I1025 17:49:19.393244   67972 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)
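The four ImageList subtests above differ only in the --format flag passed to the same listing command. A minimal sketch that runs all four variants by hand, using the command exactly as it appears in the logs:

for fmt in short table json yaml; do
  out/minikube-darwin-amd64 -p functional-188000 image ls --format "$fmt" --alsologtostderr
done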

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 ssh pgrep buildkitd: exit status 1 (467.174744ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image build -t localhost/my-image:functional-188000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 image build -t localhost/my-image:functional-188000 testdata/build --alsologtostderr: (2.910519317s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-188000 image build -t localhost/my-image:functional-188000 testdata/build --alsologtostderr:
I1025 17:49:20.333481   67994 out.go:296] Setting OutFile to fd 1 ...
I1025 17:49:20.334090   67994 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:20.334099   67994 out.go:309] Setting ErrFile to fd 2...
I1025 17:49:20.334104   67994 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 17:49:20.334348   67994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
I1025 17:49:20.335029   67994 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:20.335792   67994 config.go:182] Loaded profile config "functional-188000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 17:49:20.336315   67994 cli_runner.go:164] Run: docker container inspect functional-188000 --format={{.State.Status}}
I1025 17:49:20.396437   67994 ssh_runner.go:195] Run: systemctl --version
I1025 17:49:20.396517   67994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-188000
I1025 17:49:20.459859   67994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56240 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/functional-188000/id_rsa Username:docker}
I1025 17:49:20.548221   67994 build_images.go:151] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.1915975413.tar
I1025 17:49:20.548344   67994 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 17:49:20.559768   67994 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1915975413.tar
I1025 17:49:20.565403   67994 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1915975413.tar: stat -c "%s %y" /var/lib/minikube/build/build.1915975413.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1915975413.tar': No such file or directory
I1025 17:49:20.565475   67994 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.1915975413.tar --> /var/lib/minikube/build/build.1915975413.tar (3072 bytes)
I1025 17:49:20.593171   67994 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1915975413
I1025 17:49:20.606054   67994 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1915975413 -xf /var/lib/minikube/build/build.1915975413.tar
I1025 17:49:20.639007   67994 docker.go:341] Building image: /var/lib/minikube/build/build.1915975413
I1025 17:49:20.639123   67994 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-188000 /var/lib/minikube/build/build.1915975413
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:a096791fa8748bc2ba6fa74346f5edb113a10f0a8b24a11fda3b135f2b21690c done
#8 naming to localhost/my-image:functional-188000 done
#8 DONE 0.0s
I1025 17:49:23.116947   67994 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-188000 /var/lib/minikube/build/build.1915975413: (2.477720158s)
I1025 17:49:23.117038   67994 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1915975413
I1025 17:49:23.137094   67994 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1915975413.tar
I1025 17:49:23.148063   67994 build_images.go:207] Built localhost/my-image:functional-188000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.1915975413.tar
I1025 17:49:23.148086   67994 build_images.go:123] succeeded building to: functional-188000
I1025 17:49:23.148090   67994 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls
2023/10/25 17:49:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.71s)
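From the build log above, the testdata/build context holds a small Dockerfile (97 B) and a content.txt (62 B), and the image is produced in three steps: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. A hedged sketch of reproducing the build locally; the Dockerfile and content.txt below are reconstructed from those logged steps, not copied from the repository:

# assemble a stand-in build context (contents inferred from the build steps above)
mkdir -p /tmp/minikube-build-demo && cd /tmp/minikube-build-demo
printf 'demo\n' > content.txt        # placeholder; the real content.txt is not shown in the log
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# build inside the cluster's Docker, as the test does, then confirm the tag is listed
out/minikube-darwin-amd64 -p functional-188000 image build -t localhost/my-image:functional-188000 . --alsologtostderr
out/minikube-darwin-amd64 -p functional-188000 image ls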

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.94s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.859423115s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-188000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.94s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (2.16s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-188000 docker-env) && out/minikube-darwin-amd64 status -p functional-188000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-188000 docker-env) && out/minikube-darwin-amd64 status -p functional-188000": (1.323502226s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-188000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.16s)
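The DockerEnv check works by pointing the host's docker CLI at the Docker daemon running inside the functional-188000 node: docker-env prints export statements and eval applies them to the current shell. The same sequence the test runs, taken from the log:

# export the node's Docker connection settings, then talk to its daemon
eval $(out/minikube-darwin-amd64 -p functional-188000 docker-env)
out/minikube-darwin-amd64 status -p functional-188000
docker images    # now lists images inside the minikube node rather than on the host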

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image load --daemon gcr.io/google-containers/addon-resizer:functional-188000 --alsologtostderr
E1025 17:47:32.092387   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 image load --daemon gcr.io/google-containers/addon-resizer:functional-188000 --alsologtostderr: (4.307022931s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image load --daemon gcr.io/google-containers/addon-resizer:functional-188000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 image load --daemon gcr.io/google-containers/addon-resizer:functional-188000 --alsologtostderr: (2.493214631s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.507058301s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-188000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image load --daemon gcr.io/google-containers/addon-resizer:functional-188000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 image load --daemon gcr.io/google-containers/addon-resizer:functional-188000 --alsologtostderr: (4.545501794s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image save gcr.io/google-containers/addon-resizer:functional-188000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 image save gcr.io/google-containers/addon-resizer:functional-188000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.08692436s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image rm gcr.io/google-containers/addon-resizer:functional-188000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.737354187s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-188000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 image save --daemon gcr.io/google-containers/addon-resizer:functional-188000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-188000 image save --daemon gcr.io/google-containers/addon-resizer:functional-188000 --alsologtostderr: (1.79770622s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-188000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)
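Taken together, the ImageLoad*, ImageSave*, and ImageRemove subtests exercise a full round trip of an image between the host's Docker daemon, a tarball, and the cluster. A condensed sketch of that flow using the commands that appear in the logs (the tarball path is the workspace location used by this run; any writable path works):

# host -> cluster
docker pull gcr.io/google-containers/addon-resizer:1.8.8
docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-188000
out/minikube-darwin-amd64 -p functional-188000 image load --daemon gcr.io/google-containers/addon-resizer:functional-188000

# cluster -> tarball -> cluster
out/minikube-darwin-amd64 -p functional-188000 image save gcr.io/google-containers/addon-resizer:functional-188000 /Users/jenkins/workspace/addon-resizer-save.tar
out/minikube-darwin-amd64 -p functional-188000 image rm gcr.io/google-containers/addon-resizer:functional-188000
out/minikube-darwin-amd64 -p functional-188000 image load /Users/jenkins/workspace/addon-resizer-save.tar

# cluster -> host daemon
out/minikube-darwin-amd64 -p functional-188000 image save --daemon gcr.io/google-containers/addon-resizer:functional-188000
docker image inspect gcr.io/google-containers/addon-resizer:functional-188000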

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (19.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-188000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-188000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-f9qjr" [2eaaedf5-75a3-4878-98c7-95aa7a66034e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-f9qjr" [2eaaedf5-75a3-4878-98c7-95aa7a66034e] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.016615725s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.17s)
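The remaining ServiceCmd subtests all build on the hello-node deployment created here; the setup is plain kubectl against the functional-188000 context. The two commands from the log, plus a hypothetical third line for watching the rollout by hand (the test polls the same app=hello-node label programmatically):

kubectl --context functional-188000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-188000 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-188000 get pods -l app=hello-node    # wait until the pod reports Running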

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 service list -o json
functional_test.go:1493: Took "436.313219ms" to run "out/minikube-darwin-amd64 -p functional-188000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 service --namespace=default --https --url hello-node: signal: killed (15.016556746s)

                                                
                                                
-- stdout --
	https://127.0.0.1:56495

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:56495
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-188000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-188000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-188000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-188000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 67443: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-188000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-188000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4f3bbe68-bec8-4df7-86f6-54215a1e901c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4f3bbe68-bec8-4df7-86f6-54215a1e901c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.014282269s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.20s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-188000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
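The TunnelCmd subtests rely on a background `minikube tunnel` process so that a LoadBalancer service gets an ingress IP reachable from the host. A sketch of the manual equivalent assembled from the commands in the logs; the curl probe is a hypothetical addition, and testsvc.yaml is assumed to be the nginx LoadBalancer manifest in the test's testdata:

# keep the tunnel running in its own shell; the DeleteTunnel step later kills this process
out/minikube-darwin-amd64 -p functional-188000 tunnel --alsologtostderr &

kubectl --context functional-188000 apply -f testdata/testsvc.yaml
kubectl --context functional-188000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
curl -sI http://127.0.0.1/    # hypothetical manual probe of the endpoint AccessDirect reports as working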

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-188000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 67473: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 service hello-node --url --format={{.IP}}: signal: killed (15.002676043s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 service hello-node --url: signal: killed (15.00243063s)

                                                
                                                
-- stdout --
	http://127.0.0.1:56568

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:56568
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)
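On the darwin/docker-driver combination, `minikube service` keeps a forwarding process alive after printing the URL, which is why the HTTPS, Format, and URL subtests end with `signal: killed` after their 15 s timeout rather than exiting on their own, and why stderr warns that the terminal needs to stay open. The three invocations from the logs:

# each prints a 127.0.0.1 URL and then blocks until interrupted
out/minikube-darwin-amd64 -p functional-188000 service --namespace=default --https --url hello-node
out/minikube-darwin-amd64 -p functional-188000 service hello-node --url --format={{.IP}}
out/minikube-darwin-amd64 -p functional-188000 service hello-node --url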

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "400.683817ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "80.979548ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "401.015128ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "80.074227ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
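The ProfileCmd timings above compare the full listing, which verifies each cluster, with the lightweight variants (-l / --light), which skip that verification and come back in roughly 80 ms instead of roughly 400 ms in this run. The variants exercised, from the logs:

out/minikube-darwin-amd64 profile list                   # full listing, checks cluster status
out/minikube-darwin-amd64 profile list -l                # lightweight listing
out/minikube-darwin-amd64 profile list -o json
out/minikube-darwin-amd64 profile list -o json --light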

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3599369112/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698281342975605000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3599369112/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698281342975605000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3599369112/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698281342975605000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3599369112/001/test-1698281342975605000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (391.572772ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 00:49 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 00:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 00:49 test-1698281342975605000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh cat /mount-9p/test-1698281342975605000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-188000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d295e521-7604-4d89-b490-6c3045cfb744] Pending
helpers_test.go:344: "busybox-mount" [d295e521-7604-4d89-b490-6c3045cfb744] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d295e521-7604-4d89-b490-6c3045cfb744] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d295e521-7604-4d89-b490-6c3045cfb744] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.013071815s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-188000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port3599369112/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.79s)
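The MountCmd tests pair a host-side `minikube mount` daemon with guest-side checks over SSH. A condensed sketch of the any-port flow using the commands from the log; the host directory below ($HOME/mount-demo) is a hypothetical stand-in for the per-run temp directory the test uses:

# host side: expose a local directory inside the node at /mount-9p (runs until stopped)
out/minikube-darwin-amd64 mount -p functional-188000 "$HOME/mount-demo:/mount-9p" --alsologtostderr -v=1 &

# guest side: confirm the 9p mount, inspect it, then unmount
out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-darwin-amd64 -p functional-188000 ssh -- ls -la /mount-9p
out/minikube-darwin-amd64 -p functional-188000 ssh "sudo umount -f /mount-9p"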

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3246887376/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (442.537022ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3246887376/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 ssh "sudo umount -f /mount-9p": exit status 1 (402.07593ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-188000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3246887376/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (3.05s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3427169461/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3427169461/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3427169461/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T" /mount1: exit status 1 (664.026563ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-188000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-188000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3427169461/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3427169461/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-188000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3427169461/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.05s)
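Note: cleanup in this block goes through `minikube mount --kill=true`, which terminates the mount daemons spawned earlier in one call; the subsequent stop steps then find no parent process, hence the "assuming dead" messages. The same cleanup by hand (profile name as above):
    out/minikube-darwin-amd64 mount -p functional-188000 --kill=true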

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-188000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-188000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-188000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestImageBuild/serial/Setup (22.08s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-099000 --driver=docker 
E1025 17:49:34.978916   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-099000 --driver=docker : (22.076765167s)
--- PASS: TestImageBuild/serial/Setup (22.08s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.62s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-099000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-099000: (1.61912483s)
--- PASS: TestImageBuild/serial/NormalBuild (1.62s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.97s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-099000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)
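Note: build arguments reach the in-cluster image build through repeated --build-opt flags, as in the command above. Assuming the default Docker build backend, this is roughly equivalent to the following plain docker build invocation (an approximation for illustration, not taken from the test code):
    docker build -t aaa:latest --build-arg ENV_A=test_env_str --no-cache ./testdata/image-build/test-arg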

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.76s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-099000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.76s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-099000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

                                                
                                    
TestJSONOutput/start/Command (36.86s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-305000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1025 17:57:35.237890   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 17:58:02.960307   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-305000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (36.862605503s)
--- PASS: TestJSONOutput/start/Command (36.86s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-305000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-305000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.94s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-305000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-305000 --output=json --user=testUser: (10.938060938s)
--- PASS: TestJSONOutput/stop/Command (10.94s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.78s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-952000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-952000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (393.584826ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7af65cf5-6b20-4765-8f45-a78d646f12da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-952000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f00a82b-4ff9-48c1-925d-d33b429833bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"f540db49-643b-426d-a93d-de1df1c1792f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig"}}
	{"specversion":"1.0","id":"cc159c98-b674-42f4-99fc-aed6d7c6dbd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"8a06eb05-3e27-4a3c-b15a-6d42b89904ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"709f00a1-25ae-4dff-beab-b009b00b430e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube"}}
	{"specversion":"1.0","id":"77bff6c3-191e-4265-af6c-ce7dd94b7657","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f68a7ded-9865-4077-9e7b-a6952b4ea412","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-952000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-952000
--- PASS: TestErrorJSONOutput (0.78s)
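Note: with --output=json, minikube emits one CloudEvents-style JSON object per line; failures surface as io.k8s.sigs.minikube.error events carrying the exit code (56, DRV_UNSUPPORTED_OS above). A sketch for pulling only the error events out of such a run, assuming jq is available on the host (jq is not part of this test suite):
    out/minikube-darwin-amd64 start -p json-output-error-952000 --output=json --driver=fail 2>/dev/null \
      | jq 'select(.type == "io.k8s.sigs.minikube.error") | .data'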

                                                
                                    
TestKicCustomNetwork/create_custom_network (24.85s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-401000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-401000 --network=: (22.288472835s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-401000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-401000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-401000: (2.504580637s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.85s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.26s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-629000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-629000 --network=bridge: (21.893857757s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-629000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-629000: (2.317088089s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.26s)

                                                
                                    
TestKicExistingNetwork (24.55s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-791000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-791000 --network=existing-network: (21.878812254s)
helpers_test.go:175: Cleaning up "existing-network-791000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-791000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-791000: (2.322647044s)
--- PASS: TestKicExistingNetwork (24.55s)

                                                
                                    
TestKicCustomSubnet (24.37s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-874000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-874000 --subnet=192.168.60.0/24: (22.019577948s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-874000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-874000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-874000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-874000: (2.296735337s)
--- PASS: TestKicCustomSubnet (24.37s)
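Note: the subnet assertion above reads the network's IPAM configuration straight from Docker; the same check can be run by hand against any profile-created network (network name and expected subnet as above):
    docker network inspect custom-subnet-874000 --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected: 192.168.60.0/24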

                                                
                                    
TestKicStaticIP (24.94s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-737000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-737000 --static-ip=192.168.200.200: (22.239169183s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-737000 ip
helpers_test.go:175: Cleaning up "static-ip-737000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-737000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-737000: (2.469223276s)
--- PASS: TestKicStaticIP (24.94s)
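Note: the static IP is pinned at start time and then read back; the two commands below mirror the passing steps above (profile name and address as above):
    out/minikube-darwin-amd64 start -p static-ip-737000 --static-ip=192.168.200.200
    out/minikube-darwin-amd64 -p static-ip-737000 ip   # should print 192.168.200.200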

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (51.3s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-230000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-230000 --driver=docker : (21.752208022s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-232000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-232000 --driver=docker : (22.795856693s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-230000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-232000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-232000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-232000: (2.558055807s)
helpers_test.go:175: Cleaning up "first-230000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-230000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-230000: (2.525475683s)
--- PASS: TestMinikubeProfile (51.30s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-034000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-034000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.415872951s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.42s)
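Note: the mount-at-start path exercised here shares a host directory into the guest at /minikube-host via the --mount* flags, and the later Verify steps simply list that directory over ssh. A condensed sketch of the same flow, with some of the mount flags above omitted for brevity (values as in the command above):
    out/minikube-darwin-amd64 start -p mount-start-1-034000 --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=docker
    out/minikube-darwin-amd64 -p mount-start-1-034000 ssh -- ls /minikube-host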

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-034000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.44s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-049000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-049000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.441462278s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.44s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-049000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.08s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-034000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-034000 --alsologtostderr -v=5: (2.0765752s)
--- PASS: TestMountStart/serial/DeleteFirst (2.08s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-049000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.57s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-049000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-049000: (1.566014199s)
--- PASS: TestMountStart/serial/Stop (1.57s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.46s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-049000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-049000: (7.45576745s)
--- PASS: TestMountStart/serial/RestartStopped (8.46s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-049000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (51.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-971000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1025 18:01:51.112463   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 18:02:35.246857   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-971000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (50.515920236s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (51.31s)

                                                
                                    
TestMultiNode/serial/AddNode (15.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-971000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-971000 -v 3 --alsologtostderr: (14.041787698s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status --alsologtostderr
multinode_test.go:116: (dbg) Done: out/minikube-darwin-amd64 -p multinode-971000 status --alsologtostderr: (1.052868441s)
--- PASS: TestMultiNode/serial/AddNode (15.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (14.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Done: out/minikube-darwin-amd64 -p multinode-971000 status --output json --alsologtostderr: (1.013116774s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp testdata/cp-test.txt multinode-971000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile4072635735/001/cp-test_multinode-971000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000:/home/docker/cp-test.txt multinode-971000-m02:/home/docker/cp-test_multinode-971000_multinode-971000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m02 "sudo cat /home/docker/cp-test_multinode-971000_multinode-971000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000:/home/docker/cp-test.txt multinode-971000-m03:/home/docker/cp-test_multinode-971000_multinode-971000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m03 "sudo cat /home/docker/cp-test_multinode-971000_multinode-971000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp testdata/cp-test.txt multinode-971000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile4072635735/001/cp-test_multinode-971000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000-m02:/home/docker/cp-test.txt multinode-971000:/home/docker/cp-test_multinode-971000-m02_multinode-971000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000 "sudo cat /home/docker/cp-test_multinode-971000-m02_multinode-971000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000-m02:/home/docker/cp-test.txt multinode-971000-m03:/home/docker/cp-test_multinode-971000-m02_multinode-971000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m03 "sudo cat /home/docker/cp-test_multinode-971000-m02_multinode-971000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp testdata/cp-test.txt multinode-971000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile4072635735/001/cp-test_multinode-971000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000-m03:/home/docker/cp-test.txt multinode-971000:/home/docker/cp-test_multinode-971000-m03_multinode-971000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000 "sudo cat /home/docker/cp-test_multinode-971000-m03_multinode-971000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000-m03:/home/docker/cp-test.txt multinode-971000-m02:/home/docker/cp-test_multinode-971000-m03_multinode-971000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000-m02 "sudo cat /home/docker/cp-test_multinode-971000-m03_multinode-971000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.18s)
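Note: the copy matrix above exercises every direction (host to node, node to host, and node to node) with minikube cp, verifying each transfer by cat over ssh. A trimmed sketch of one round trip, using the profile and node names shown above:
    out/minikube-darwin-amd64 -p multinode-971000 cp testdata/cp-test.txt multinode-971000:/home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p multinode-971000 ssh -n multinode-971000 "sudo cat /home/docker/cp-test.txt"
    out/minikube-darwin-amd64 -p multinode-971000 cp multinode-971000:/home/docker/cp-test.txt multinode-971000-m02:/home/docker/cp-test.txt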

                                                
                                    
TestMultiNode/serial/StopNode (2.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-971000 node stop m03: (1.500027702s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-971000 status: exit status 7 (718.909996ms)

                                                
                                                
-- stdout --
	multinode-971000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-971000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-971000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-971000 status --alsologtostderr: exit status 7 (729.476002ms)

                                                
                                                
-- stdout --
	multinode-971000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-971000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-971000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:04:47.765765   71164 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:04:47.765973   71164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:04:47.765981   71164 out.go:309] Setting ErrFile to fd 2...
	I1025 18:04:47.765985   71164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:04:47.766183   71164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:04:47.766383   71164 out.go:303] Setting JSON to false
	I1025 18:04:47.766406   71164 mustload.go:65] Loading cluster: multinode-971000
	I1025 18:04:47.766457   71164 notify.go:220] Checking for updates...
	I1025 18:04:47.766722   71164 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:04:47.766735   71164 status.go:255] checking status of multinode-971000 ...
	I1025 18:04:47.767175   71164 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:04:47.821308   71164 status.go:330] multinode-971000 host status = "Running" (err=<nil>)
	I1025 18:04:47.821347   71164 host.go:66] Checking if "multinode-971000" exists ...
	I1025 18:04:47.821603   71164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000
	I1025 18:04:47.872785   71164 host.go:66] Checking if "multinode-971000" exists ...
	I1025 18:04:47.873063   71164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:04:47.873124   71164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:04:47.926965   71164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57079 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000/id_rsa Username:docker}
	I1025 18:04:48.014932   71164 ssh_runner.go:195] Run: systemctl --version
	I1025 18:04:48.020050   71164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:04:48.031357   71164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-971000
	I1025 18:04:48.085005   71164 kubeconfig.go:92] found "multinode-971000" server: "https://127.0.0.1:57083"
	I1025 18:04:48.085031   71164 api_server.go:166] Checking apiserver status ...
	I1025 18:04:48.085070   71164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:04:48.097331   71164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2278/cgroup
	W1025 18:04:48.107785   71164 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2278/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:04:48.107851   71164 ssh_runner.go:195] Run: ls
	I1025 18:04:48.112377   71164 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57083/healthz ...
	I1025 18:04:48.118816   71164 api_server.go:279] https://127.0.0.1:57083/healthz returned 200:
	ok
	I1025 18:04:48.118833   71164 status.go:421] multinode-971000 apiserver status = Running (err=<nil>)
	I1025 18:04:48.118843   71164 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 18:04:48.118855   71164 status.go:255] checking status of multinode-971000-m02 ...
	I1025 18:04:48.119105   71164 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Status}}
	I1025 18:04:48.176226   71164 status.go:330] multinode-971000-m02 host status = "Running" (err=<nil>)
	I1025 18:04:48.176257   71164 host.go:66] Checking if "multinode-971000-m02" exists ...
	I1025 18:04:48.176518   71164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971000-m02
	I1025 18:04:48.229784   71164 host.go:66] Checking if "multinode-971000-m02" exists ...
	I1025 18:04:48.230051   71164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 18:04:48.230102   71164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971000-m02
	I1025 18:04:48.283927   71164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57119 SSHKeyPath:/Users/jenkins/minikube-integration/17488-64832/.minikube/machines/multinode-971000-m02/id_rsa Username:docker}
	I1025 18:04:48.370536   71164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:04:48.381737   71164 status.go:257] multinode-971000-m02 status: &{Name:multinode-971000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 18:04:48.381763   71164 status.go:255] checking status of multinode-971000-m03 ...
	I1025 18:04:48.382137   71164 cli_runner.go:164] Run: docker container inspect multinode-971000-m03 --format={{.State.Status}}
	I1025 18:04:48.436167   71164 status.go:330] multinode-971000-m03 host status = "Stopped" (err=<nil>)
	I1025 18:04:48.436192   71164 status.go:343] host is not running, skipping remaining checks
	I1025 18:04:48.436200   71164 status.go:257] multinode-971000-m03 status: &{Name:multinode-971000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.95s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-971000 node start m03 --alsologtostderr: (12.613383812s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.64s)
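Note: restarting only the stopped worker uses the node subcommand rather than a full cluster start; the passing steps above reduce to the following sequence (profile and node name as above):
    out/minikube-darwin-amd64 -p multinode-971000 node stop m03
    out/minikube-darwin-amd64 -p multinode-971000 node start m03
    out/minikube-darwin-amd64 -p multinode-971000 status
    kubectl get nodes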

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (105.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-971000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-971000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-971000: (13.737574735s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-971000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-971000 --wait=true -v=8 --alsologtostderr: (1m31.26072592s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-971000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (105.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 node delete m03
E1025 18:06:51.121358   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-971000 node delete m03: (5.15737989s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.03s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (12.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 stop
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-971000 stop: (12.415290317s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-971000 status: exit status 7 (162.622089ms)

                                                
                                                
-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-971000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-971000 status --alsologtostderr: exit status 7 (161.901025ms)

                                                
                                                
-- stdout --
	multinode-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-971000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:07:05.861978   71587 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:07:05.862189   71587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:05.862194   71587 out.go:309] Setting ErrFile to fd 2...
	I1025 18:07:05.862198   71587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:05.862413   71587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17488-64832/.minikube/bin
	I1025 18:07:05.862614   71587 out.go:303] Setting JSON to false
	I1025 18:07:05.862636   71587 mustload.go:65] Loading cluster: multinode-971000
	I1025 18:07:05.862669   71587 notify.go:220] Checking for updates...
	I1025 18:07:05.862959   71587 config.go:182] Loaded profile config "multinode-971000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:07:05.862971   71587 status.go:255] checking status of multinode-971000 ...
	I1025 18:07:05.863401   71587 cli_runner.go:164] Run: docker container inspect multinode-971000 --format={{.State.Status}}
	I1025 18:07:05.915426   71587 status.go:330] multinode-971000 host status = "Stopped" (err=<nil>)
	I1025 18:07:05.915445   71587 status.go:343] host is not running, skipping remaining checks
	I1025 18:07:05.915451   71587 status.go:257] multinode-971000 status: &{Name:multinode-971000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 18:07:05.915475   71587 status.go:255] checking status of multinode-971000-m02 ...
	I1025 18:07:05.915712   71587 cli_runner.go:164] Run: docker container inspect multinode-971000-m02 --format={{.State.Status}}
	I1025 18:07:05.967155   71587 status.go:330] multinode-971000-m02 host status = "Stopped" (err=<nil>)
	I1025 18:07:05.967191   71587 status.go:343] host is not running, skipping remaining checks
	I1025 18:07:05.967199   71587 status.go:257] multinode-971000-m02 status: &{Name:multinode-971000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (12.74s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (57.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-971000 --wait=true -v=8 --alsologtostderr --driver=docker 
E1025 18:07:35.255853   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-971000 --wait=true -v=8 --alsologtostderr --driver=docker : (56.908021339s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-971000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.75s)
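Note: after the full restart, node readiness is asserted with a kubectl go-template that prints the Ready condition for every node; the same one-liner from the log can be reused as a quick health check:
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'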

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-971000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-971000-m02 --driver=docker 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-971000-m02 --driver=docker : exit status 14 (499.246142ms)

                                                
                                                
-- stdout --
	* [multinode-971000-m02] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-971000-m02' is duplicated with machine name 'multinode-971000-m02' in profile 'multinode-971000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-971000-m03 --driver=docker 
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-971000-m03 --driver=docker : (23.392931168s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-971000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-971000: exit status 80 (474.914664ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-971000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-971000-m03 already exists in multinode-971000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-971000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-971000-m03: (2.577536182s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.01s)
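Note: this test exercises minikube's profile-name uniqueness check. A minimal way to reproduce the same check by hand, assuming an existing multi-node profile named multinode-971000 (the profile and machine names here are the ones from the log above), would be:

$ out/minikube-darwin-amd64 node list -p multinode-971000
$ out/minikube-darwin-amd64 start -p multinode-971000-m02 --driver=docker    # rejected with MK_USAGE (exit 14): the name collides with an existing machine name in multinode-971000
$ out/minikube-darwin-amd64 start -p multinode-971000-m03 --driver=docker    # accepted: the profile name is unique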

                                                
                                    
TestPreload (144.57s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E1025 18:08:58.340346   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m16.501111112s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-244000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-244000 image pull gcr.io/k8s-minikube/busybox: (1.364736237s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-244000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-244000: (10.832755325s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (52.949743555s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-244000 image list
helpers_test.go:175: Cleaning up "test-preload-244000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-244000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-244000: (2.621024459s)
--- PASS: TestPreload (144.57s)
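Note: the preload test above follows a start-without-preload / modify / stop / restart cycle. A rough sketch of the same flow, using only commands and flags that appear in the log (profile name and Kubernetes version are the ones the test used):

$ out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
$ out/minikube-darwin-amd64 -p test-preload-244000 image pull gcr.io/k8s-minikube/busybox
$ out/minikube-darwin-amd64 stop -p test-preload-244000
$ out/minikube-darwin-amd64 start -p test-preload-244000 --memory=2200 --wait=true --driver=docker
$ out/minikube-darwin-amd64 -p test-preload-244000 image list    # the pulled busybox image should still be listed after the restart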

                                                
                                    
TestScheduledStopUnix (95.9s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-838000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-838000 --memory=2048 --driver=docker : (21.771834589s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-838000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-838000 -n scheduled-stop-838000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-838000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-838000 --cancel-scheduled
E1025 18:11:51.112965   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-838000 -n scheduled-stop-838000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-838000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-838000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1025 18:12:35.269877   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-838000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-838000: exit status 7 (116.380233ms)

                                                
                                                
-- stdout --
	scheduled-stop-838000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-838000 -n scheduled-stop-838000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-838000 -n scheduled-stop-838000: exit status 7 (108.891343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-838000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-838000: (2.211507406s)
--- PASS: TestScheduledStopUnix (95.90s)
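Note: the scheduled-stop flow above can be reproduced interactively with the same flags the test uses; the 5m and 15s durations below are taken straight from the log:

$ out/minikube-darwin-amd64 stop -p scheduled-stop-838000 --schedule 5m        # arm a stop five minutes out
$ out/minikube-darwin-amd64 stop -p scheduled-stop-838000 --cancel-scheduled   # cancel the pending stop
$ out/minikube-darwin-amd64 stop -p scheduled-stop-838000 --schedule 15s       # re-arm; shortly afterwards the host reports Stopped
$ out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-838000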

                                                
                                    
TestSkaffold (121.65s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe385972888 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-790000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-790000 --memory=2600 --driver=docker : (22.497774421s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe385972888 run --minikube-profile skaffold-790000 --kube-context skaffold-790000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe385972888 run --minikube-profile skaffold-790000 --kube-context skaffold-790000 --status-check=true --port-forward=false --interactive=false: (1m23.788561575s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-f67678b7f-5lvvp" [bc28715f-d78c-4092-b4ca-38ab43362110] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.014884154s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-74ff7cc7c7-kww46" [3e9cced3-0243-4135-be27-430b2f875c32] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.011720691s
helpers_test.go:175: Cleaning up "skaffold-790000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-790000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-790000: (3.153594821s)
--- PASS: TestSkaffold (121.65s)
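Note: the skaffold flags used above are the interesting part of this test. A sketch of the equivalent manual invocation, assuming a locally installed skaffold binary stands in for the temporary one the test downloads:

$ out/minikube-darwin-amd64 start -p skaffold-790000 --memory=2600 --driver=docker
$ skaffold run --minikube-profile skaffold-790000 --kube-context skaffold-790000 --status-check=true --port-forward=false --interactive=false
$ kubectl get pods -l app=leeroy-app -n default    # the test then waits for the leeroy-app and leeroy-web pods to become Ready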

                                                
                                    
TestInsufficientStorage (10.84s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-515000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-515000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.793004966s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cff4ac72-fd64-4ba1-aed6-f94600543ea0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-515000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76064000-93b5-4158-acda-acc70be928da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"131b20bb-8007-4a88-9044-85b8c007ca6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig"}}
	{"specversion":"1.0","id":"c1833caf-19ba-4c47-ab92-8caed561ba0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"d0ba2f61-f48c-443a-bdd8-d8728b198b7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fc50f364-a42c-4296-b510-3315ce530f3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube"}}
	{"specversion":"1.0","id":"80c0c0bc-3cc2-4928-93b6-f4372e98acd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1427d340-8b75-41b9-9d8b-364c85298465","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"10fc269b-2fc8-4345-a98b-d1b6b0a57ac9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a536fbcc-3316-459d-866b-dd7c1d4f32f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9743466-17b7-4ee3-992c-0e2543b28822","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"53784db5-e6b2-4fb3-84b2-7184507c08b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-515000 in cluster insufficient-storage-515000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e7204a31-d91a-44a0-90fc-d3c848cd90ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9c30cbc-d271-448f-9e6e-421b284316e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8127ddc9-9539-4b0b-9a63-60d090a14774","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-515000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-515000 --output=json --layout=cluster: exit status 7 (375.058136ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-515000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-515000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:14:49.926736   73135 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-515000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-515000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-515000 --output=json --layout=cluster: exit status 7 (374.831979ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-515000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-515000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:14:50.302350   73148 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-515000" does not appear in /Users/jenkins/minikube-integration/17488-64832/kubeconfig
	E1025 18:14:50.312963   73148 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/insufficient-storage-515000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-515000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-515000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-515000: (2.296996122s)
--- PASS: TestInsufficientStorage (10.84s)
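Note: with --output=json, minikube emits one CloudEvents-style JSON object per line on stdout, as shown above. Assuming jq is available (an assumption, it is not part of the test), the error event behind exit code 26 could be extracted like this:

$ out/minikube-darwin-amd64 start -p insufficient-storage-515000 --memory=2048 --output=json --wait=true --driver=docker \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'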

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.41s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17488
- KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4294929375/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4294929375/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4294929375/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4294929375/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.41s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.44s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17488
- KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3889750795/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3889750795/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3889750795/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3889750795/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-830000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-830000: (3.459435253s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.46s)

                                                
                                    
TestPause/serial/Start (75.17s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-753000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E1025 18:20:50.525600   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:21:51.128359   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-753000 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m15.169414416s)
--- PASS: TestPause/serial/Start (75.17s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (37.25s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-753000 --alsologtostderr -v=1 --driver=docker 
E1025 18:22:12.448480   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:22:35.262827   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-753000 --alsologtostderr -v=1 --driver=docker : (37.237009159s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.25s)

                                                
                                    
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-753000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-753000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-753000 --output=json --layout=cluster: exit status 2 (393.027203ms)

                                                
                                                
-- stdout --
	{"Name":"pause-753000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-753000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)

                                                
                                    
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-753000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-753000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
TestPause/serial/DeletePaused (2.52s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-753000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-753000 --alsologtostderr -v=5: (2.520404945s)
--- PASS: TestPause/serial/DeletePaused (2.52s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-753000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-753000: exit status 1 (57.034703ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-753000: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)
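Note: taken together, the TestPause serial steps above amount to the following lifecycle. Every command is copied from the log, and the exit codes noted are the ones observed in this run:

$ out/minikube-darwin-amd64 start -p pause-753000 --memory=2048 --install-addons=false --wait=all --driver=docker
$ out/minikube-darwin-amd64 pause -p pause-753000
$ out/minikube-darwin-amd64 status -p pause-753000 --output=json --layout=cluster   # exit status 2 while paused (StatusName "Paused")
$ out/minikube-darwin-amd64 unpause -p pause-753000
$ out/minikube-darwin-amd64 delete -p pause-753000
$ docker volume inspect pause-753000    # exit status 1 after deletion: the volume no longer exists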

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-202000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-202000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (426.95977ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-202000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17488
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17488-64832/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17488-64832/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)
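Note: the non-zero exit above is the expected guard against combining --no-kubernetes with --kubernetes-version. As the stderr suggests, a globally configured kubernetes-version can be cleared before starting without Kubernetes, for example:

$ minikube config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-202000 --no-kubernetes --driver=docker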

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (24.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-202000 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-202000 --driver=docker : (23.707382924s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-202000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-202000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-202000 --no-kubernetes --driver=docker : (15.226307604s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-202000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-202000 status -o json: exit status 2 (380.916204ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-202000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-202000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-202000: (2.270149128s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.88s)

                                                
                                    
TestNoKubernetes/serial/Start (6.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-202000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-202000 --no-kubernetes --driver=docker : (6.465304303s)
--- PASS: TestNoKubernetes/serial/Start (6.47s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-202000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-202000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (361.526464ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
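Note: the kubelet check above relies on the exit status of systemctl inside the node. A hand-run version of the same probe, copied from the log (an exit status of 0 would mean kubelet is active; the non-zero status seen here confirms it is not running):

$ out/minikube-darwin-amd64 ssh -p NoKubernetes-202000 "sudo systemctl is-active --quiet service kubelet"
$ echo $?    # 1 in this run, i.e. kubelet is not active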

                                                
                                    
TestNoKubernetes/serial/ProfileList (34.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (19.676542025s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (15.061130572s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.74s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-202000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-202000: (1.557188204s)
--- PASS: TestNoKubernetes/serial/Stop (1.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-202000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-202000 --driver=docker : (7.421762673s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.42s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-202000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-202000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (359.107215ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (38.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
E1025 18:24:28.604066   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:24:56.295152   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (38.81050356s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.81s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-143000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-143000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fcc4l" [880301bc-d5ff-4ebe-85ff-b934a78d9097] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fcc4l" [880301bc-d5ff-4ebe-85ff-b934a78d9097] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.026180169s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-143000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
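Note: the DNS, Localhost and HairPin checks for each network plugin all go through the netcat deployment created from testdata/netcat-deployment.yaml. In manual form, using the kubectl context from this group:

$ kubectl --context auto-143000 replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context auto-143000 exec deployment/netcat -- nslookup kubernetes.default                   # DNS
$ kubectl --context auto-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # Localhost
$ kubectl --context auto-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # HairPin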

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (52.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
E1025 18:25:38.350552   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (52.025688328s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-h6n9l" [f9ead8c2-c77b-4893-a922-f41cfa1467a6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.020249013s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-143000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-143000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d56zt" [3e4ddcd4-a9bb-44ae-8881-543438e2e9b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-d56zt" [3e4ddcd4-a9bb-44ae-8881-543438e2e9b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.012907715s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-143000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E1025 18:27:35.273943   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m16.85312627s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.85s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (53.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (53.810144594s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.81s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hcg9t" [fa5a0402-a9c8-4a5a-a612-97c35937ac01] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.024155228s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-143000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-143000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z7sck" [b73b8c5f-0019-489c-ab42-b8190e02bd2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z7sck" [b73b8c5f-0019-489c-ab42-b8190e02bd2e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.012246047s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-143000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Start (39.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (39.533389205s)
--- PASS: TestNetworkPlugins/group/false/Start (39.53s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-143000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-143000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-l2rfj" [6ce8dd33-637b-4aec-a1b5-6278f81ed77a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-l2rfj" [6ce8dd33-637b-4aec-a1b5-6278f81ed77a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.013760683s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-143000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-143000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-143000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sfwvw" [1a3ab7c1-bef4-4dc3-a662-a50de223ed59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sfwvw" [1a3ab7c1-bef4-4dc3-a662-a50de223ed59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.009762176s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (38.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (38.448039937s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.45s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-143000 exec deployment/netcat -- nslookup kubernetes.default
E1025 18:30:00.503255   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
E1025 18:30:00.508391   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
E1025 18:30:00.518972   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
E1025 18:30:00.539081   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
E1025 18:30:00.579268   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1025 18:30:00.660645   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1025 18:30:00.820880   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (38.17s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (38.174730071s)
--- PASS: TestNetworkPlugins/group/flannel/Start (38.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-143000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-143000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tkt2s" [b704a7e0-8a42-4c60-a17d-734e6c912699] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tkt2s" [b704a7e0-8a42-4c60-a17d-734e6c912699] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.012109577s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.51s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-143000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (11.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-skr6g" [32b55c4b-98b3-4e3b-b4d8-e30bc49adcc7] Pending: Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni]) / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:344: "kube-flannel-ds-skr6g" [32b55c4b-98b3-4e3b-b4d8-e30bc49adcc7] Pending / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:344: "kube-flannel-ds-skr6g" [32b55c4b-98b3-4e3b-b4d8-e30bc49adcc7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 11.020842268s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (11.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (76.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (1m16.89216702s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.89s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-143000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-143000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5q6b7" [352d1da7-08a9-4269-8b8c-941291b560f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5q6b7" [352d1da7-08a9-4269-8b8c-941291b560f2] Running
E1025 18:31:22.428098   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.01340764s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-143000 exec deployment/netcat -- nslookup kubernetes.default
E1025 18:31:26.610999   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1025 18:31:26.616785   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
E1025 18:31:26.627790   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
E1025 18:31:26.647979   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
E1025 18:31:26.688094   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1025 18:31:26.768518   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (74.71s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E1025 18:32:07.573795   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-143000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (1m14.710612163s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (74.71s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-143000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-143000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wz7zk" [380de5ac-704d-468d-9b07-43b534959fec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wz7zk" [380de5ac-704d-468d-9b07-43b534959fec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010264276s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-143000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-143000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-143000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dwns2" [639dd9c2-0503-46fd-86aa-bbd3723e3466] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dwns2" [639dd9c2-0503-46fd-86aa-bbd3723e3466] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.013740215s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-143000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-143000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (74.46s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.28.3
E1025 18:33:44.355340   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
E1025 18:34:04.836224   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
E1025 18:34:10.459556   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
E1025 18:34:12.871806   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:12.877192   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:12.887401   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:12.907457   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:12.947672   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:13.027857   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:13.188008   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:13.508378   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:14.148671   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:15.429987   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:17.990240   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:23.111827   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:28.625311   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:34:33.352289   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:34:45.797998   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
E1025 18:34:47.437619   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:47.443456   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:47.454416   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:47.474529   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:47.514661   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:47.594915   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:47.757069   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:48.078359   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:48.719348   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:49.999573   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:52.559844   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:34:53.833749   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.28.3: (1m14.461470594s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-622000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ab5d5af5-08f1-4931-97c6-2be46e6e3f91] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1025 18:34:57.680209   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ab5d5af5-08f1-4931-97c6-2be46e6e3f91] Running
E1025 18:35:00.513084   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.019694468s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-622000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-622000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-622000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.124792207s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-622000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-622000 --alsologtostderr -v=3
E1025 18:35:07.921188   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-622000 --alsologtostderr -v=3: (10.937138906s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (111.67981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-622000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (311.45s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.28.3
E1025 18:35:28.167814   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:28.173303   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:28.183938   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:28.196747   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
E1025 18:35:28.204564   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:28.245121   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:28.327306   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:28.404200   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:35:28.487561   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:28.809727   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:29.450187   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:30.731545   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:33.293954   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:34.797409   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:35:38.414301   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:48.656072   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:35:51.678186   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:36:02.606083   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:02.611456   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:02.621923   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:02.642333   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:02.682453   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:02.763094   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:02.923686   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:03.245150   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:03.886422   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:05.167006   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:07.721081   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
E1025 18:36:07.727487   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:09.137041   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:36:09.365790   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:36:12.848281   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:23.089424   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:26.619787   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
E1025 18:36:34.236681   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 18:36:43.570688   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:36:50.099933   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:36:51.157283   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
E1025 18:36:54.304636   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
E1025 18:36:56.720342   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.28.3: (5m10.818881898s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
E1025 18:40:28.176806   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (311.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-479000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-479000 --alsologtostderr -v=3: (1.570598441s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-479000 -n old-k8s-version-479000: exit status 7 (109.063991ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-479000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mksn9" [afc1222d-70a5-4d4b-9bfe-f04c64f60bf4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mksn9" [afc1222d-70a5-4d4b-9bfe-f04c64f60bf4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.017881287s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mksn9" [afc1222d-70a5-4d4b-9bfe-f04c64f60bf4] Running
E1025 18:40:50.759377   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012214907s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-622000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-622000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.45s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-622000 --alsologtostderr -v=1
E1025 18:40:55.867740   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-622000 -n no-preload-622000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-622000 -n no-preload-622000: exit status 2 (440.753695ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-622000 -n no-preload-622000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-622000 -n no-preload-622000: exit status 2 (404.323ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-622000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-622000 -n no-preload-622000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-622000 -n no-preload-622000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (37.95s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-488000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.3
E1025 18:41:02.616636   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:41:26.634884   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
E1025 18:41:30.308527   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-488000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.3: (37.952407354s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (37.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-488000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aebb10ba-a0a5-43ce-90b5-a7482fb6f629] Pending
helpers_test.go:344: "busybox" [aebb10ba-a0a5-43ce-90b5-a7482fb6f629] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aebb10ba-a0a5-43ce-90b5-a7482fb6f629] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.020825856s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-488000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-488000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-488000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.151949596s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-488000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-488000 --alsologtostderr -v=3
E1025 18:41:51.175984   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-488000 --alsologtostderr -v=3: (10.996086979s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.43s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-488000 -n embed-certs-488000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-488000 -n embed-certs-488000: exit status 7 (107.744538ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-488000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (313.01s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-488000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.3
E1025 18:42:18.394219   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 18:42:20.975645   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:42:35.311418   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
E1025 18:42:48.659564   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
E1025 18:43:06.931375   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:43:23.904034   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/calico-143000/client.crt: no such file or directory
E1025 18:43:34.615484   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kubenet-143000/client.crt: no such file or directory
E1025 18:44:12.899613   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/custom-flannel-143000/client.crt: no such file or directory
E1025 18:44:28.652707   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E1025 18:44:47.466847   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
E1025 18:44:55.639610   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:44:55.645038   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:44:55.655213   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:44:55.675611   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:44:55.715967   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:44:55.797932   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:44:55.958092   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:44:56.278229   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:44:56.918621   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:44:58.199590   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:45:00.541764   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
E1025 18:45:00.760333   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:45:05.881463   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:45:16.122165   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:45:28.196589   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/enable-default-cni-143000/client.crt: no such file or directory
E1025 18:45:36.604600   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:46:02.634167   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
E1025 18:46:17.566453   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/no-preload-622000/client.crt: no such file or directory
E1025 18:46:23.588979   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
E1025 18:46:26.648655   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
E1025 18:46:51.185933   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/addons-882000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-488000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.28.3: (5m12.526851673s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-488000 -n embed-certs-488000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (313.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-brb2d" [58923b25-293c-404d-a655-0186d8c66f6a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1025 18:47:20.984574   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/bridge-143000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-brb2d" [58923b25-293c-404d-a655-0186d8c66f6a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.018491142s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-brb2d" [58923b25-293c-404d-a655-0186d8c66f6a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018431955s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-488000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-488000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/embed-certs/serial/Pause (3.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-488000 --alsologtostderr -v=1
E1025 18:47:35.320432   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/functional-188000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-488000 -n embed-certs-488000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-488000 -n embed-certs-488000: exit status 2 (399.400427ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-488000 -n embed-certs-488000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-488000 -n embed-certs-488000: exit status 2 (404.250706ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-488000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-488000 -n embed-certs-488000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-488000 -n embed-certs-488000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.53s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-555000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.3
E1025 18:47:49.695205   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/kindnet-143000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-555000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.3: (1m16.888484989s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.89s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-555000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d54c2a20-6f87-4185-bb17-9d8d5d8aeccb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d54c2a20-6f87-4185-bb17-9d8d5d8aeccb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.020552883s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-555000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-555000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-555000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.132806096s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-555000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-555000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-555000 --alsologtostderr -v=3: (10.922479408s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000: exit status 7 (110.302858ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-555000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (311.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-555000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-555000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.28.3: (5m10.428643772s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (311.09s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (20.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gptbw" [acafb160-7f9b-4e26-bbc8-a375482a2fef] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gptbw" [acafb160-7f9b-4e26-bbc8-a375482a2fef] Running
E1025 18:54:47.486149   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/false-143000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.018122985s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (20.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gptbw" [acafb160-7f9b-4e26-bbc8-a375482a2fef] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011414198s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-555000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-555000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-555000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000: exit status 2 (422.094366ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000: exit status 2 (396.323432ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-555000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
E1025 18:55:00.558915   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/auto-143000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-555000 -n default-k8s-diff-port-555000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.46s)

TestStartStop/group/newest-cni/serial/FirstStart (36.54s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-343000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-343000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.28.3: (36.534932149s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.54s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-343000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-343000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.140602286s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/newest-cni/serial/Stop (11.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-343000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-343000 --alsologtostderr -v=3: (11.073399289s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-343000 -n newest-cni-343000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-343000 -n newest-cni-343000: exit status 7 (110.43068ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-343000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/newest-cni/serial/SecondStart (26.12s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-343000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.28.3
E1025 18:56:02.654182   65292 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17488-64832/.minikube/profiles/flannel-143000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-343000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.28.3: (25.703455953s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-343000 -n newest-cni-343000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.52s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-343000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.52s)

TestStartStop/group/newest-cni/serial/Pause (3.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-343000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-343000 -n newest-cni-343000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-343000 -n newest-cni-343000: exit status 2 (402.131847ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-343000 -n newest-cni-343000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-343000 -n newest-cni-343000: exit status 2 (402.117285ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-343000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-343000 -n newest-cni-343000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-343000 -n newest-cni-343000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)

Test skip (19/321)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestAddons/parallel/Registry (13.98s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 59.283931ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-dlc7x" [8f28c2da-5c1b-46c1-b778-d315dbb56cb2] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019377541s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xcjmn" [54d145a1-5c41-46a0-bc89-a7504660fce7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013748462s
addons_test.go:339: (dbg) Run:  kubectl --context addons-882000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-882000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-882000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.834455257s)
addons_test.go:354: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.98s)

TestAddons/parallel/Ingress (11.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-882000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-882000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-882000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c5da168f-83ae-4b9a-9923-db21c38b7419] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c5da168f-83ae-4b9a-9923-db21c38b7419] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.101019894s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p addons-882000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:281: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.33s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-188000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-188000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-mxq68" [683d4c9e-78ca-44b6-b23c-e27404bbc07a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-mxq68" [683d4c9e-78ca-44b6-b23c-e27404bbc07a] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.015462642s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.13s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.42s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-143000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-143000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-143000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: docker daemon config:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: docker system info:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: cri-docker daemon status:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: cri-docker daemon config:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: cri-dockerd version:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: containerd daemon status:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: containerd daemon config:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: containerd config dump:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: crio daemon status:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: crio daemon config:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: /etc/crio:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

>>> host: crio config:
* Profile "cilium-143000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143000"

----------------------- debugLogs end: cilium-143000 [took: 5.95776022s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-143000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-143000
--- SKIP: TestNetworkPlugins/group/cilium (6.42s)

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-361000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-361000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)
