Test Report: Docker_macOS 15642

4cf467cecc4d49355139c24bc1420f3978a367dd:2023-01-14:27426

Failed tests (16/296)

TestIngressAddonLegacy/StartLegacyK8sCluster (268.32s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-021549 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0114 02:16:43.047532    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:18:59.197021    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:19:19.880571    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:19.886306    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:19.898305    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:19.920482    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:19.961638    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:20.043006    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:20.203444    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:20.525609    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:21.167284    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:22.447830    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:25.009127    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:26.891142    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:19:30.131489    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:19:40.372235    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:20:00.854790    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-021549 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m28.292269516s)

-- stdout --
	* [ingress-addon-legacy-021549] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-021549 in cluster ingress-addon-legacy-021549
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0114 02:15:49.568610    5508 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:15:49.568807    5508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:15:49.568813    5508 out.go:309] Setting ErrFile to fd 2...
	I0114 02:15:49.568817    5508 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:15:49.568924    5508 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:15:49.569452    5508 out.go:303] Setting JSON to false
	I0114 02:15:49.588247    5508 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":923,"bootTime":1673690426,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 02:15:49.588329    5508 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:15:49.609910    5508 out.go:177] * [ingress-addon-legacy-021549] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:15:49.653885    5508 notify.go:220] Checking for updates...
	I0114 02:15:49.675684    5508 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:15:49.696683    5508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:15:49.718956    5508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:15:49.741960    5508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:15:49.763647    5508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 02:15:49.784884    5508 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:15:49.846989    5508 docker.go:138] docker version: linux-20.10.21
	I0114 02:15:49.847121    5508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:15:49.986360    5508 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-14 10:15:49.896274223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:15:50.062088    5508 out.go:177] * Using the docker driver based on user configuration
	I0114 02:15:50.083980    5508 start.go:294] selected driver: docker
	I0114 02:15:50.084007    5508 start.go:838] validating driver "docker" against <nil>
	I0114 02:15:50.084031    5508 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:15:50.087838    5508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:15:50.226451    5508 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-14 10:15:50.137455719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:15:50.226550    5508 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 02:15:50.226690    5508 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 02:15:50.248536    5508 out.go:177] * Using Docker Desktop driver with root privileges
	I0114 02:15:50.270022    5508 cni.go:95] Creating CNI manager for ""
	I0114 02:15:50.270055    5508 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:15:50.270075    5508 start_flags.go:319] config:
	{Name:ingress-addon-legacy-021549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-021549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:15:50.292428    5508 out.go:177] * Starting control plane node ingress-addon-legacy-021549 in cluster ingress-addon-legacy-021549
	I0114 02:15:50.334283    5508 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:15:50.356123    5508 out.go:177] * Pulling base image ...
	I0114 02:15:50.398236    5508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0114 02:15:50.398291    5508 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:15:50.455865    5508 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 02:15:50.455890    5508 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 02:15:50.497540    5508 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0114 02:15:50.497562    5508 cache.go:57] Caching tarball of preloaded images
	I0114 02:15:50.497944    5508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0114 02:15:50.542054    5508 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0114 02:15:50.563116    5508 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:15:50.798974    5508 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0114 02:16:07.892557    5508 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:16:07.892751    5508 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:16:08.509481    5508 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0114 02:16:08.509760    5508 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/config.json ...
	I0114 02:16:08.509794    5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/config.json: {Name:mk43621aa12416a727dfcfd39a1b8a9c87a82a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:16:08.510104    5508 cache.go:193] Successfully downloaded all kic artifacts
	I0114 02:16:08.510129    5508 start.go:364] acquiring machines lock for ingress-addon-legacy-021549: {Name:mk059708f1ee422c2c43c60a6ec8d2062f575157 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:16:08.510303    5508 start.go:368] acquired machines lock for "ingress-addon-legacy-021549" in 163.922µs
	I0114 02:16:08.510329    5508 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-021549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-021549 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 02:16:08.510401    5508 start.go:125] createHost starting for "" (driver="docker")
	I0114 02:16:08.564442    5508 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0114 02:16:08.564773    5508 start.go:159] libmachine.API.Create for "ingress-addon-legacy-021549" (driver="docker")
	I0114 02:16:08.564826    5508 client.go:168] LocalClient.Create starting
	I0114 02:16:08.565030    5508 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem
	I0114 02:16:08.565115    5508 main.go:134] libmachine: Decoding PEM data...
	I0114 02:16:08.565146    5508 main.go:134] libmachine: Parsing certificate...
	I0114 02:16:08.565244    5508 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem
	I0114 02:16:08.565313    5508 main.go:134] libmachine: Decoding PEM data...
	I0114 02:16:08.565330    5508 main.go:134] libmachine: Parsing certificate...
	I0114 02:16:08.566278    5508 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-021549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0114 02:16:08.624768    5508 cli_runner.go:211] docker network inspect ingress-addon-legacy-021549 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0114 02:16:08.624878    5508 network_create.go:280] running [docker network inspect ingress-addon-legacy-021549] to gather additional debugging logs...
	I0114 02:16:08.624901    5508 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-021549
	W0114 02:16:08.678546    5508 cli_runner.go:211] docker network inspect ingress-addon-legacy-021549 returned with exit code 1
	I0114 02:16:08.678575    5508 network_create.go:283] error running [docker network inspect ingress-addon-legacy-021549]: docker network inspect ingress-addon-legacy-021549: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-021549
	I0114 02:16:08.678598    5508 network_create.go:285] output of [docker network inspect ingress-addon-legacy-021549]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-021549
	
	** /stderr **
	I0114 02:16:08.678713    5508 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 02:16:08.733500    5508 network.go:277] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b8f418] misses:0}
	I0114 02:16:08.733540    5508 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:16:08.733556    5508 network_create.go:123] attempt to create docker network ingress-addon-legacy-021549 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0114 02:16:08.733650    5508 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-021549 ingress-addon-legacy-021549
	I0114 02:16:08.829530    5508 network_create.go:107] docker network ingress-addon-legacy-021549 192.168.49.0/24 created
	I0114 02:16:08.829567    5508 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-021549" container
	I0114 02:16:08.829696    5508 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0114 02:16:08.883708    5508 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-021549 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-021549 --label created_by.minikube.sigs.k8s.io=true
	I0114 02:16:08.938418    5508 oci.go:103] Successfully created a docker volume ingress-addon-legacy-021549
	I0114 02:16:08.938546    5508 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-021549-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-021549 --entrypoint /usr/bin/test -v ingress-addon-legacy-021549:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0114 02:16:09.349830    5508 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-021549
	I0114 02:16:09.349899    5508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0114 02:16:09.349918    5508 kic.go:190] Starting extracting preloaded images to volume ...
	I0114 02:16:09.350053    5508 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-021549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0114 02:16:15.602885    5508 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-021549:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (6.252627335s)
	I0114 02:16:15.602905    5508 kic.go:199] duration metric: took 6.252899 seconds to extract preloaded images to volume
	I0114 02:16:15.603033    5508 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0114 02:16:15.770136    5508 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-021549 --name ingress-addon-legacy-021549 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-021549 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-021549 --network ingress-addon-legacy-021549 --ip 192.168.49.2 --volume ingress-addon-legacy-021549:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0114 02:16:16.116719    5508 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Running}}
	I0114 02:16:16.175634    5508 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Status}}
	I0114 02:16:16.237436    5508 cli_runner.go:164] Run: docker exec ingress-addon-legacy-021549 stat /var/lib/dpkg/alternatives/iptables
	I0114 02:16:16.357371    5508 oci.go:144] the created container "ingress-addon-legacy-021549" has a running status.
	I0114 02:16:16.357400    5508 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa...
	I0114 02:16:16.429911    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0114 02:16:16.430002    5508 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0114 02:16:16.538539    5508 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Status}}
	I0114 02:16:16.596624    5508 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0114 02:16:16.596644    5508 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-021549 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0114 02:16:16.702136    5508 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Status}}
	I0114 02:16:16.759921    5508 machine.go:88] provisioning docker machine ...
	I0114 02:16:16.759973    5508 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-021549"
	I0114 02:16:16.760088    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:16.816116    5508 main.go:134] libmachine: Using SSH client type: native
	I0114 02:16:16.816309    5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50531 <nil> <nil>}
	I0114 02:16:16.816326    5508 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-021549 && echo "ingress-addon-legacy-021549" | sudo tee /etc/hostname
	I0114 02:16:16.943306    5508 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-021549
	
	I0114 02:16:16.943399    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:17.000510    5508 main.go:134] libmachine: Using SSH client type: native
	I0114 02:16:17.000670    5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50531 <nil> <nil>}
	I0114 02:16:17.000688    5508 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-021549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-021549/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-021549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 02:16:17.118880    5508 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:16:17.118907    5508 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 02:16:17.118932    5508 ubuntu.go:177] setting up certificates
	I0114 02:16:17.118940    5508 provision.go:83] configureAuth start
	I0114 02:16:17.119034    5508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-021549
	I0114 02:16:17.175300    5508 provision.go:138] copyHostCerts
	I0114 02:16:17.175347    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:16:17.175426    5508 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 02:16:17.175433    5508 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:16:17.175543    5508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 02:16:17.175710    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:16:17.175753    5508 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 02:16:17.175757    5508 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:16:17.175825    5508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 02:16:17.175956    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:16:17.175991    5508 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 02:16:17.175996    5508 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:16:17.176062    5508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 02:16:17.176192    5508 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-021549 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-021549]
	I0114 02:16:17.260975    5508 provision.go:172] copyRemoteCerts
	I0114 02:16:17.261032    5508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 02:16:17.261098    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:17.318655    5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
	I0114 02:16:17.405443    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 02:16:17.405541    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0114 02:16:17.422345    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 02:16:17.422439    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 02:16:17.438909    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 02:16:17.439001    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 02:16:17.456102    5508 provision.go:86] duration metric: configureAuth took 337.145539ms
	I0114 02:16:17.456115    5508 ubuntu.go:193] setting minikube options for container-runtime
	I0114 02:16:17.456280    5508 config.go:180] Loaded profile config "ingress-addon-legacy-021549": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0114 02:16:17.456350    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:17.513824    5508 main.go:134] libmachine: Using SSH client type: native
	I0114 02:16:17.513988    5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50531 <nil> <nil>}
	I0114 02:16:17.514005    5508 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 02:16:17.632737    5508 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 02:16:17.632760    5508 ubuntu.go:71] root file system type: overlay
	I0114 02:16:17.632933    5508 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 02:16:17.633032    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:17.689557    5508 main.go:134] libmachine: Using SSH client type: native
	I0114 02:16:17.689731    5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50531 <nil> <nil>}
	I0114 02:16:17.689779    5508 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 02:16:17.816223    5508 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 02:16:17.816337    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:17.872992    5508 main.go:134] libmachine: Using SSH client type: native
	I0114 02:16:17.873145    5508 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 50531 <nil> <nil>}
	I0114 02:16:17.873158    5508 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 02:16:18.465964    5508 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:16:17.814414255 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0114 02:16:18.465986    5508 machine.go:91] provisioned docker machine in 1.706010308s
	I0114 02:16:18.465992    5508 client.go:171] LocalClient.Create took 9.901011827s
	I0114 02:16:18.466009    5508 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-021549" took 9.901091208s
	I0114 02:16:18.466019    5508 start.go:300] post-start starting for "ingress-addon-legacy-021549" (driver="docker")
	I0114 02:16:18.466025    5508 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 02:16:18.466100    5508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 02:16:18.466163    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:18.523743    5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
	I0114 02:16:18.610660    5508 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 02:16:18.614261    5508 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 02:16:18.614280    5508 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 02:16:18.614289    5508 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 02:16:18.614295    5508 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 02:16:18.614305    5508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 02:16:18.614412    5508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 02:16:18.614597    5508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 02:16:18.614604    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /etc/ssl/certs/27282.pem
	I0114 02:16:18.614810    5508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 02:16:18.622075    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:16:18.639333    5508 start.go:303] post-start completed in 173.302438ms
	I0114 02:16:18.639888    5508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-021549
	I0114 02:16:18.696828    5508 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/config.json ...
	I0114 02:16:18.697269    5508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:16:18.697333    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:18.754033    5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
	I0114 02:16:18.838178    5508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 02:16:18.842737    5508 start.go:128] duration metric: createHost completed in 10.332173272s
	I0114 02:16:18.842754    5508 start.go:83] releasing machines lock for "ingress-addon-legacy-021549", held for 10.332287068s
	I0114 02:16:18.842856    5508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-021549
	I0114 02:16:18.899155    5508 ssh_runner.go:195] Run: cat /version.json
	I0114 02:16:18.899182    5508 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0114 02:16:18.899236    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:18.899272    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:18.959908    5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
	I0114 02:16:18.959926    5508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
	I0114 02:16:19.043049    5508 ssh_runner.go:195] Run: systemctl --version
	I0114 02:16:19.316319    5508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 02:16:19.326456    5508 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 02:16:19.326522    5508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 02:16:19.335843    5508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 02:16:19.348894    5508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 02:16:19.420605    5508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 02:16:19.488960    5508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:16:19.552634    5508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 02:16:19.750139    5508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:16:19.778791    5508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:16:19.829766    5508 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.21 ...
	I0114 02:16:19.829989    5508 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-021549 dig +short host.docker.internal
	I0114 02:16:19.934513    5508 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 02:16:19.934640    5508 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 02:16:19.939154    5508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:16:19.949032    5508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:16:20.005549    5508 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0114 02:16:20.005639    5508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:16:20.029705    5508 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0114 02:16:20.029721    5508 docker.go:543] Images already preloaded, skipping extraction
	I0114 02:16:20.029829    5508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:16:20.053675    5508 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0114 02:16:20.053701    5508 cache_images.go:84] Images are preloaded, skipping loading
	I0114 02:16:20.053799    5508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 02:16:20.123036    5508 cni.go:95] Creating CNI manager for ""
	I0114 02:16:20.123051    5508 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:16:20.123067    5508 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 02:16:20.123083    5508 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-021549 NodeName:ingress-addon-legacy-021549 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 02:16:20.123212    5508 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-021549"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 02:16:20.123298    5508 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-021549 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-021549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 02:16:20.123371    5508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0114 02:16:20.131349    5508 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 02:16:20.131424    5508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 02:16:20.138933    5508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0114 02:16:20.151755    5508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0114 02:16:20.164673    5508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I0114 02:16:20.177574    5508 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0114 02:16:20.181360    5508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:16:20.191011    5508 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549 for IP: 192.168.49.2
	I0114 02:16:20.191147    5508 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 02:16:20.191218    5508 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 02:16:20.191268    5508 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.key
	I0114 02:16:20.191295    5508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.crt with IP's: []
	I0114 02:16:20.333668    5508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.crt ...
	I0114 02:16:20.333681    5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.crt: {Name:mk84abce9af1b89be3a255209fde1b99bb8c0a08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:16:20.333994    5508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.key ...
	I0114 02:16:20.334002    5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/client.key: {Name:mkd635305ba708619c27a171239eb62e5058521a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:16:20.334210    5508 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key.dd3b5fb2
	I0114 02:16:20.334253    5508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 02:16:20.393935    5508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt.dd3b5fb2 ...
	I0114 02:16:20.393944    5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt.dd3b5fb2: {Name:mk2a745774da1cc1a8385e74128b2bf2cb76adb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:16:20.394164    5508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key.dd3b5fb2 ...
	I0114 02:16:20.394172    5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key.dd3b5fb2: {Name:mke955028d483a9e517264459bc3dfa9777cd029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:16:20.394367    5508 certs.go:320] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt
	I0114 02:16:20.394540    5508 certs.go:324] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key
	I0114 02:16:20.394712    5508 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key
	I0114 02:16:20.394733    5508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt with IP's: []
	I0114 02:16:20.597156    5508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt ...
	I0114 02:16:20.597165    5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt: {Name:mk5790178e92ab6a43073067029e3e1ecad8a3eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:16:20.597431    5508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key ...
	I0114 02:16:20.597439    5508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key: {Name:mk6ec9aa2687075972c72cdd91b14bf36e1ceaa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:16:20.597769    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0114 02:16:20.597803    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0114 02:16:20.597827    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0114 02:16:20.597859    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0114 02:16:20.597907    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 02:16:20.597952    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 02:16:20.598013    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 02:16:20.598034    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 02:16:20.598188    5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 02:16:20.598276    5508 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 02:16:20.598324    5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 02:16:20.598381    5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 02:16:20.598444    5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 02:16:20.598485    5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 02:16:20.598555    5508 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:16:20.598623    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /usr/share/ca-certificates/27282.pem
	I0114 02:16:20.598679    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:16:20.598699    5508 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem -> /usr/share/ca-certificates/2728.pem
	I0114 02:16:20.599190    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 02:16:20.617477    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0114 02:16:20.634274    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 02:16:20.651033    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/ingress-addon-legacy-021549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 02:16:20.668237    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 02:16:20.685131    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 02:16:20.701794    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 02:16:20.718818    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 02:16:20.735896    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 02:16:20.753024    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 02:16:20.770004    5508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 02:16:20.787044    5508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 02:16:20.799538    5508 ssh_runner.go:195] Run: openssl version
	I0114 02:16:20.805058    5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 02:16:20.813041    5508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 02:16:20.816951    5508 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:16:20.817022    5508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 02:16:20.822470    5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 02:16:20.830432    5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 02:16:20.838493    5508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 02:16:20.842295    5508 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:16:20.842350    5508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 02:16:20.847656    5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 02:16:20.855635    5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 02:16:20.863498    5508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:16:20.867443    5508 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:16:20.867487    5508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:16:20.872876    5508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 02:16:20.880982    5508 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-021549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-021549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:16:20.881112    5508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 02:16:20.904091    5508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 02:16:20.911776    5508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 02:16:20.918932    5508 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 02:16:20.919001    5508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 02:16:20.926410    5508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 02:16:20.926432    5508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 02:16:20.974583    5508 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I0114 02:16:20.974639    5508 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 02:16:21.265850    5508 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 02:16:21.265971    5508 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 02:16:21.266051    5508 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 02:16:21.485730    5508 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 02:16:21.486271    5508 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 02:16:21.486317    5508 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 02:16:21.558805    5508 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 02:16:21.601974    5508 out.go:204]   - Generating certificates and keys ...
	I0114 02:16:21.602053    5508 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 02:16:21.602144    5508 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 02:16:21.604735    5508 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 02:16:21.801044    5508 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 02:16:22.011731    5508 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 02:16:22.204268    5508 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 02:16:22.314876    5508 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 02:16:22.315057    5508 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0114 02:16:22.474058    5508 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 02:16:22.474185    5508 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0114 02:16:22.541108    5508 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 02:16:22.734579    5508 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 02:16:22.864708    5508 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 02:16:22.864777    5508 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 02:16:23.090639    5508 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 02:16:23.169573    5508 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 02:16:23.371720    5508 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 02:16:23.566389    5508 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 02:16:23.566783    5508 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 02:16:23.610331    5508 out.go:204]   - Booting up control plane ...
	I0114 02:16:23.610572    5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 02:16:23.610723    5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 02:16:23.610867    5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 02:16:23.611005    5508 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 02:16:23.611258    5508 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 02:17:03.576892    5508 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 02:17:03.578031    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:17:03.578234    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:17:08.580183    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:17:08.580396    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:17:18.582121    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:17:18.582340    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:17:38.584131    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:17:38.584353    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:18:18.586273    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:18:18.586493    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:18:18.586513    5508 kubeadm.go:317] 
	I0114 02:18:18.586563    5508 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I0114 02:18:18.586611    5508 kubeadm.go:317] 		timed out waiting for the condition
	I0114 02:18:18.586622    5508 kubeadm.go:317] 
	I0114 02:18:18.586684    5508 kubeadm.go:317] 	This error is likely caused by:
	I0114 02:18:18.586744    5508 kubeadm.go:317] 		- The kubelet is not running
	I0114 02:18:18.586864    5508 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 02:18:18.586879    5508 kubeadm.go:317] 
	I0114 02:18:18.586975    5508 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 02:18:18.587006    5508 kubeadm.go:317] 		- 'systemctl status kubelet'
	I0114 02:18:18.587037    5508 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I0114 02:18:18.587042    5508 kubeadm.go:317] 
	I0114 02:18:18.587147    5508 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 02:18:18.587258    5508 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0114 02:18:18.587278    5508 kubeadm.go:317] 
	I0114 02:18:18.587368    5508 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0114 02:18:18.587417    5508 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I0114 02:18:18.587512    5508 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I0114 02:18:18.587546    5508 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I0114 02:18:18.587552    5508 kubeadm.go:317] 
	I0114 02:18:18.589747    5508 kubeadm.go:317] W0114 10:16:20.973569     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0114 02:18:18.589810    5508 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 02:18:18.589901    5508 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
	I0114 02:18:18.589997    5508 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 02:18:18.590106    5508 kubeadm.go:317] W0114 10:16:23.571276     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0114 02:18:18.590205    5508 kubeadm.go:317] W0114 10:16:23.572040     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0114 02:18:18.590263    5508 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 02:18:18.590319    5508 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0114 02:18:18.590511    5508 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0114 10:16:20.973569     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0114 10:16:23.571276     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0114 10:16:23.572040     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-021549 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0114 10:16:20.973569     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0114 10:16:23.571276     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0114 10:16:23.572040     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0114 02:18:18.590543    5508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0114 02:18:19.004655    5508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:18:19.014498    5508 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 02:18:19.014563    5508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 02:18:19.021863    5508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 02:18:19.021890    5508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 02:18:19.070041    5508 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I0114 02:18:19.070085    5508 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 02:18:19.363161    5508 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 02:18:19.363246    5508 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 02:18:19.363324    5508 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 02:18:19.585374    5508 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 02:18:19.593368    5508 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 02:18:19.593403    5508 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 02:18:19.657620    5508 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 02:18:19.679105    5508 out.go:204]   - Generating certificates and keys ...
	I0114 02:18:19.679206    5508 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 02:18:19.679275    5508 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 02:18:19.679350    5508 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 02:18:19.679415    5508 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0114 02:18:19.679497    5508 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0114 02:18:19.679550    5508 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0114 02:18:19.679601    5508 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0114 02:18:19.679689    5508 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0114 02:18:19.679773    5508 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 02:18:19.679839    5508 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 02:18:19.679881    5508 kubeadm.go:317] [certs] Using the existing "sa" key
	I0114 02:18:19.679958    5508 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 02:18:19.841344    5508 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 02:18:20.086877    5508 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 02:18:20.153567    5508 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 02:18:20.210152    5508 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 02:18:20.210820    5508 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 02:18:20.232473    5508 out.go:204]   - Booting up control plane ...
	I0114 02:18:20.232605    5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 02:18:20.232668    5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 02:18:20.232756    5508 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 02:18:20.232826    5508 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 02:18:20.232978    5508 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 02:19:00.221626    5508 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 02:19:00.222576    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:19:00.222808    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:19:05.223409    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:19:05.223577    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:19:15.225765    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:19:15.225996    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:19:35.226508    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:19:35.226666    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:20:15.229417    5508 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:20:15.229641    5508 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:20:15.229662    5508 kubeadm.go:317] 
	I0114 02:20:15.229706    5508 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I0114 02:20:15.229758    5508 kubeadm.go:317] 		timed out waiting for the condition
	I0114 02:20:15.229775    5508 kubeadm.go:317] 
	I0114 02:20:15.229824    5508 kubeadm.go:317] 	This error is likely caused by:
	I0114 02:20:15.229859    5508 kubeadm.go:317] 		- The kubelet is not running
	I0114 02:20:15.229981    5508 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 02:20:15.229995    5508 kubeadm.go:317] 
	I0114 02:20:15.230087    5508 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 02:20:15.230136    5508 kubeadm.go:317] 		- 'systemctl status kubelet'
	I0114 02:20:15.230187    5508 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I0114 02:20:15.230202    5508 kubeadm.go:317] 
	I0114 02:20:15.230310    5508 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 02:20:15.230427    5508 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0114 02:20:15.230442    5508 kubeadm.go:317] 
	I0114 02:20:15.230553    5508 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0114 02:20:15.230621    5508 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I0114 02:20:15.230708    5508 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I0114 02:20:15.230750    5508 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I0114 02:20:15.230762    5508 kubeadm.go:317] 
	I0114 02:20:15.232923    5508 kubeadm.go:317] W0114 10:18:19.068847    3439 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0114 02:20:15.232989    5508 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 02:20:15.233096    5508 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
	I0114 02:20:15.233189    5508 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 02:20:15.233290    5508 kubeadm.go:317] W0114 10:18:20.214759    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0114 02:20:15.233403    5508 kubeadm.go:317] W0114 10:18:20.215665    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0114 02:20:15.233470    5508 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 02:20:15.233523    5508 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 02:20:15.233560    5508 kubeadm.go:398] StartCluster complete in 3m54.34908771s
	I0114 02:20:15.233657    5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 02:20:15.256731    5508 logs.go:274] 0 containers: []
	W0114 02:20:15.256745    5508 logs.go:276] No container was found matching "kube-apiserver"
	I0114 02:20:15.256823    5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 02:20:15.280428    5508 logs.go:274] 0 containers: []
	W0114 02:20:15.280442    5508 logs.go:276] No container was found matching "etcd"
	I0114 02:20:15.280525    5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 02:20:15.303199    5508 logs.go:274] 0 containers: []
	W0114 02:20:15.303212    5508 logs.go:276] No container was found matching "coredns"
	I0114 02:20:15.303296    5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 02:20:15.326883    5508 logs.go:274] 0 containers: []
	W0114 02:20:15.326895    5508 logs.go:276] No container was found matching "kube-scheduler"
	I0114 02:20:15.326980    5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 02:20:15.349760    5508 logs.go:274] 0 containers: []
	W0114 02:20:15.349773    5508 logs.go:276] No container was found matching "kube-proxy"
	I0114 02:20:15.349859    5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 02:20:15.373266    5508 logs.go:274] 0 containers: []
	W0114 02:20:15.373282    5508 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 02:20:15.373365    5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 02:20:15.395934    5508 logs.go:274] 0 containers: []
	W0114 02:20:15.395947    5508 logs.go:276] No container was found matching "storage-provisioner"
	I0114 02:20:15.396038    5508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 02:20:15.418996    5508 logs.go:274] 0 containers: []
	W0114 02:20:15.419008    5508 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 02:20:15.419018    5508 logs.go:123] Gathering logs for container status ...
	I0114 02:20:15.419027    5508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 02:20:17.473489    5508 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054417949s)
	I0114 02:20:17.473633    5508 logs.go:123] Gathering logs for kubelet ...
	I0114 02:20:17.473640    5508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 02:20:17.512397    5508 logs.go:123] Gathering logs for dmesg ...
	I0114 02:20:17.512409    5508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 02:20:17.525714    5508 logs.go:123] Gathering logs for describe nodes ...
	I0114 02:20:17.525725    5508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 02:20:17.578861    5508 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 02:20:17.578875    5508 logs.go:123] Gathering logs for Docker ...
	I0114 02:20:17.578881    5508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0114 02:20:17.594083    5508 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0114 10:18:19.068847    3439 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0114 10:18:20.214759    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0114 10:18:20.215665    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0114 02:20:17.594109    5508 out.go:239] * 
	* 
	W0114 02:20:17.594222    5508 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0114 10:18:19.068847    3439 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0114 10:18:20.214759    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0114 10:18:20.215665    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0114 10:18:19.068847    3439 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0114 10:18:20.214759    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0114 10:18:20.215665    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 02:20:17.594240    5508 out.go:239] * 
	* 
	W0114 02:20:17.594862    5508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 02:20:17.659571    5508 out.go:177] 
	W0114 02:20:17.723725    5508 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0114 10:18:19.068847    3439 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0114 10:18:20.214759    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0114 10:18:20.215665    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0114 10:18:19.068847    3439 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0114 10:18:20.214759    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0114 10:18:20.215665    3439 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 02:20:17.723871    5508 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0114 02:20:17.723946    5508 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0114 02:20:17.745778    5508 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-021549 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (268.32s)
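The kubeadm output captured above says the kubelet never answered on localhost:10248, so no control-plane containers were ever created, and it lists the checks to run next. A minimal sketch of following that advice on the docker-driver node, using only the commands the log itself names (the profile name is taken from this run; driving the checks through `minikube ssh` is an assumption about how one would reach the node, not what the test harness did):

	# kubelet state inside the minikube node
	minikube ssh -p ingress-addon-legacy-021549 -- "sudo systemctl status kubelet"
	minikube ssh -p ingress-addon-legacy-021549 -- "sudo journalctl -xeu kubelet | tail -n 100"
	# any control-plane containers that started and then exited
	minikube ssh -p ingress-addon-legacy-021549 -- "sudo docker ps -a | grep kube | grep -v pause"
	# then, for a failing container ID found above:
	# minikube ssh -p ingress-addon-legacy-021549 -- "sudo docker logs CONTAINERID"

If those checks point at a cgroup-driver mismatch, the suggestion printed in the log can be tried as-is: minikube start -p ingress-addon-legacy-021549 --extra-config=kubelet.cgroup-driver=systemd.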

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-021549 addons enable ingress --alsologtostderr -v=5
E0114 02:20:41.816634    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-021549 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.144960871s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 02:20:17.898421    5826 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:20:17.899162    5826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:20:17.899172    5826 out.go:309] Setting ErrFile to fd 2...
	I0114 02:20:17.899179    5826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:20:17.899436    5826 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:20:17.921687    5826 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0114 02:20:17.943502    5826 config.go:180] Loaded profile config "ingress-addon-legacy-021549": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0114 02:20:17.943521    5826 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-021549"
	I0114 02:20:17.943529    5826 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-021549"
	I0114 02:20:17.943823    5826 host.go:66] Checking if "ingress-addon-legacy-021549" exists ...
	I0114 02:20:17.944338    5826 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Status}}
	I0114 02:20:18.022965    5826 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0114 02:20:18.045152    5826 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0114 02:20:18.066745    5826 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0114 02:20:18.088060    5826 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0114 02:20:18.110201    5826 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0114 02:20:18.110241    5826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0114 02:20:18.110420    5826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:20:18.168373    5826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
	I0114 02:20:18.261572    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:18.313523    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:18.313544    5826 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:18.591982    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:18.648301    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:18.648325    5826 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:19.190825    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:19.244678    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:19.244699    5826 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:19.902040    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:19.954694    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:19.954715    5826 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:20.748232    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:20.802598    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:20.802616    5826 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:21.973879    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:22.027076    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:22.027092    5826 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:24.281491    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:24.337531    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:24.337548    5826 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:25.948741    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:26.003220    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:26.003237    5826 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:28.808432    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:28.862837    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:28.862852    5826 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:32.688825    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:32.744644    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:32.744661    5826 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:40.442428    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:40.494105    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:40.494121    5826 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:55.132190    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:20:55.185111    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:20:55.185130    5826 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:23.542144    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:21:23.597135    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:23.597151    5826 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:46.767130    5826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0114 02:21:46.820611    5826 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:46.820645    5826 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-021549"
	I0114 02:21:46.842403    5826 out.go:177] * Verifying ingress addon...
	I0114 02:21:46.866641    5826 out.go:177] 
	W0114 02:21:46.888240    5826 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-021549" does not exist: client config: context "ingress-addon-legacy-021549" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-021549" does not exist: client config: context "ingress-addon-legacy-021549" does not exist]
	W0114 02:21:46.888272    5826 out.go:239] * 
	* 
	W0114 02:21:46.892163    5826 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 02:21:46.913301    5826 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
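This failure is downstream of the cluster-start failure above: the apiserver on localhost:8443 never came up, so every kubectl apply retry was refused until minikube gave up with MK_ADDON_ENABLE. A small sketch of verifying that before retrying the addon step (standard minikube/kubectl invocations; note the log shows the kubeconfig context for this profile does not exist in this run, so the kubectl check would itself fail until the cluster start succeeds):

	# is the control plane for this profile actually running?
	minikube status -p ingress-addon-legacy-021549
	# if the apiserver is reachable, this lists the node instead of "connection refused"
	kubectl --context ingress-addon-legacy-021549 get nodes
	# only then retry the addon
	minikube -p ingress-addon-legacy-021549 addons enable ingress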
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-021549
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-021549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554",
	        "Created": "2023-01-14T10:16:15.823624477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:16:16.109120728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/hosts",
	        "LogPath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554-json.log",
	        "Name": "/ingress-addon-legacy-021549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-021549:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-021549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-021549",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-021549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-021549",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-021549",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-021549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "400faa231bb38db2b6d49952227d22ec8c7d0b8a69fabc39fcdb205468ef61ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50531"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50532"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50533"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50534"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50530"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/400faa231bb3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-021549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8e6398a3dfeb",
	                        "ingress-addon-legacy-021549"
	                    ],
	                    "NetworkID": "e5fead1166ab11d820a609467101bc11fb8244827de41b475d2f9b86c8d12fbc",
	                    "EndpointID": "8263e5bf5d85b882b0b1be263ddb01ab07bb0563a69c86e259d3d8807ec391b3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-021549 -n ingress-addon-legacy-021549
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-021549 -n ingress-addon-legacy-021549: exit status 6 (391.09391ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 02:21:47.378704    5909 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-021549" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-021549" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.60s)
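Note: the post-mortem status check above exits 6 because status.go:415 cannot find the profile "ingress-addon-legacy-021549" in /Users/jenkins/minikube-integration/15642-1559/kubeconfig, so no API-server endpoint can be extracted. A minimal sketch of what such an endpoint lookup amounts to, using client-go's kubeconfig loader; the function name endpointIP and the error wording are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"net"
		"net/url"

		"k8s.io/client-go/tools/clientcmd"
	)

	// endpointIP looks up the API-server host for a named cluster in a kubeconfig.
	// It fails when the cluster name is absent, mirroring the
	// "does not appear in <kubeconfig>" error in the status check above.
	// (Illustrative sketch only; not minikube's code.)
	func endpointIP(kubeconfigPath, clusterName string) (string, error) {
		cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
		if err != nil {
			return "", fmt.Errorf("load kubeconfig: %w", err)
		}
		cluster, ok := cfg.Clusters[clusterName]
		if !ok {
			return "", fmt.Errorf("%q does not appear in %s", clusterName, kubeconfigPath)
		}
		u, err := url.Parse(cluster.Server)
		if err != nil {
			return "", fmt.Errorf("parse server URL: %w", err)
		}
		host, _, err := net.SplitHostPort(u.Host)
		if err != nil {
			// Server URL may omit the port; fall back to the raw host.
			return u.Host, nil
		}
		return host, nil
	}

	func main() {
		ip, err := endpointIP("/Users/jenkins/minikube-integration/15642-1559/kubeconfig", "ingress-addon-legacy-021549")
		if err != nil {
			fmt.Println("status check would fail:", err)
			return
		}
		fmt.Println("endpoint IP:", ip)
	}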

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-021549 addons enable ingress-dns --alsologtostderr -v=5
E0114 02:22:03.685216    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-021549 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.061606858s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 02:21:47.444209    5919 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:21:47.444778    5919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:21:47.444786    5919 out.go:309] Setting ErrFile to fd 2...
	I0114 02:21:47.444790    5919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:21:47.444903    5919 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:21:47.466872    5919 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0114 02:21:47.489493    5919 config.go:180] Loaded profile config "ingress-addon-legacy-021549": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0114 02:21:47.489525    5919 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-021549"
	I0114 02:21:47.489536    5919 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-021549"
	I0114 02:21:47.490134    5919 host.go:66] Checking if "ingress-addon-legacy-021549" exists ...
	I0114 02:21:47.491089    5919 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-021549 --format={{.State.Status}}
	I0114 02:21:47.570389    5919 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0114 02:21:47.592271    5919 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0114 02:21:47.614332    5919 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0114 02:21:47.614373    5919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0114 02:21:47.614578    5919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-021549
	I0114 02:21:47.672995    5919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50531 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/ingress-addon-legacy-021549/id_rsa Username:docker}
	I0114 02:21:47.765313    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:21:47.815717    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:47.815743    5919 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:48.092072    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:21:48.147303    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:48.147322    5919 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:48.687771    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:21:48.741446    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:48.741465    5919 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:49.398164    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:21:49.449209    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:49.449224    5919 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:50.241547    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:21:50.294731    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:50.294746    5919 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:51.465601    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:21:51.518938    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:51.518956    5919 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:53.774344    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:21:53.827310    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:53.827330    5919 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:55.438211    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:21:55.491377    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:55.491394    5919 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:58.296930    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:21:58.351360    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:21:58.351378    5919 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:22:02.176685    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:22:02.228974    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:22:02.228989    5919 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:22:09.928669    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:22:09.982746    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:22:09.982762    5919 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:22:24.620694    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:22:24.675406    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:22:24.675424    5919 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:22:53.083441    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:22:53.138365    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:22:53.138381    5919 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:23:16.308198    5919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0114 02:23:16.362333    5919 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0114 02:23:16.384118    5919 out.go:177] 
	W0114 02:23:16.405152    5919 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0114 02:23:16.405169    5919 out.go:239] * 
	* 
	W0114 02:23:16.407700    5919 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 02:23:16.429001    5919 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-021549
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-021549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554",
	        "Created": "2023-01-14T10:16:15.823624477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:16:16.109120728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/hosts",
	        "LogPath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554-json.log",
	        "Name": "/ingress-addon-legacy-021549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-021549:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-021549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-021549",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-021549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-021549",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-021549",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-021549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "400faa231bb38db2b6d49952227d22ec8c7d0b8a69fabc39fcdb205468ef61ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50531"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50532"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50533"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50534"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50530"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/400faa231bb3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-021549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8e6398a3dfeb",
	                        "ingress-addon-legacy-021549"
	                    ],
	                    "NetworkID": "e5fead1166ab11d820a609467101bc11fb8244827de41b475d2f9b86c8d12fbc",
	                    "EndpointID": "8263e5bf5d85b882b0b1be263ddb01ab07bb0563a69c86e259d3d8807ec391b3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-021549 -n ingress-addon-legacy-021549
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-021549 -n ingress-addon-legacy-021549: exit status 6 (392.001899ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 02:23:16.893209    6000 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-021549" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-021549" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.51s)
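Note: in the stderr above, the addon installer keeps re-running kubectl apply against /etc/kubernetes/addons/ingress-dns-pod.yaml with growing delays (retry.go:31) until the roughly 90-second budget is spent, because the apiserver on localhost:8443 never answers. A simplified, self-contained sketch of that retry-with-backoff pattern, assuming a hypothetical apply callback and budget; it is not minikube's retry package:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff re-runs apply until it succeeds or the overall budget
	// is exhausted, roughly doubling the wait between attempts. This mirrors
	// the "will retry after ..." lines in the log; it is an illustration only.
	func retryWithBackoff(apply func() error, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		wait := 250 * time.Millisecond
		var lastErr error
		for time.Now().Before(deadline) {
			if lastErr = apply(); lastErr == nil {
				return nil
			}
			fmt.Printf("apply failed, will retry after %v: %v\n", wait, lastErr)
			time.Sleep(wait)
			wait *= 2
		}
		return fmt.Errorf("retry budget exhausted: %w", lastErr)
	}

	func main() {
		// Hypothetical apply step standing in for
		// "kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml".
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			return errors.New("the connection to the server localhost:8443 was refused")
		}, 3*time.Second)
		fmt.Println(err, "after", attempts, "attempts")
	}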

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-021549
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-021549:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554",
	        "Created": "2023-01-14T10:16:15.823624477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:16:16.109120728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/hosts",
	        "LogPath": "/var/lib/docker/containers/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554/8e6398a3dfeb978aad00c652fc4183b64683980bf418f455e1b7cb0e8b73c554-json.log",
	        "Name": "/ingress-addon-legacy-021549",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-021549:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-021549",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae66db941a82a4dcc1c9e80c74d4c39dab9942dd3d97ea5009111aa48d877925/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-021549",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-021549/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-021549",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-021549",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-021549",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "400faa231bb38db2b6d49952227d22ec8c7d0b8a69fabc39fcdb205468ef61ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50531"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50532"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50533"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50534"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "50530"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/400faa231bb3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-021549": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8e6398a3dfeb",
	                        "ingress-addon-legacy-021549"
	                    ],
	                    "NetworkID": "e5fead1166ab11d820a609467101bc11fb8244827de41b475d2f9b86c8d12fbc",
	                    "EndpointID": "8263e5bf5d85b882b0b1be263ddb01ab07bb0563a69c86e259d3d8807ec391b3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
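The full inspect dump above is only needed for post-mortem context; the checks that follow in this report read a single field with a Go template (see the later `cli_runner.go` lines running `docker container inspect ... --format={{.State.Status}}`). Below is a minimal sketch of that approach, assuming a Docker CLI on PATH and shelling out from Go; the helper name is illustrative and this is not minikube's own code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState runs the same query the post-mortem uses, but asks the
	// Docker CLI for just the state field instead of the whole JSON document:
	//   docker container inspect <name> --format={{.State.Status}}
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("ingress-addon-legacy-021549")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("state:", state) // "running" per the dump above
	}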
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-021549 -n ingress-addon-legacy-021549
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-021549 -n ingress-addon-legacy-021549: exit status 6 (390.459788ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 02:23:17.341793    6012 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-021549" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-021549" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)
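The failure above reduces to the kubeconfig check reported at status.go:415: the profile name does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig, so addons_test.go cannot build a Kubernetes client, and the suggested fix in the stdout block is `minikube update-context`. Below is a hedged sketch of that check using client-go's clientcmd package; the paths and profile name are copied from the log, but the snippet is an illustration, not the test's or minikube's own implementation.

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/Users/jenkins/minikube-integration/15642-1559/kubeconfig"
		profile := "ingress-addon-legacy-021549"

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if cluster, ok := cfg.Clusters[profile]; ok {
			fmt.Printf("cluster %q -> %s\n", profile, cluster.Server)
		} else {
			// This is the condition the status command surfaces as
			// `"<profile>" does not appear in .../kubeconfig`; running
			// `minikube update-context` rewrites the missing entry.
			fmt.Printf("cluster %q missing from %s\n", profile, kubeconfig)
		}
	}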

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (219.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-022829
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-022829
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-022829: (36.541300531s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022829 --wait=true -v=8 --alsologtostderr
E0114 02:33:59.148934    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:34:19.833131    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-022829 --wait=true -v=8 --alsologtostderr: exit status 80 (2m58.693541183s)

                                                
                                                
-- stdout --
	* [multinode-022829] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-022829 in cluster multinode-022829
	* Pulling base image ...
	* Restarting existing docker container for "multinode-022829" ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-022829-m02 in cluster multinode-022829
	* Pulling base image ...
	* Restarting existing docker container for "multinode-022829-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	  - env NO_PROXY=192.168.58.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 02:31:57.463292    9007 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:31:57.463546    9007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:31:57.463553    9007 out.go:309] Setting ErrFile to fd 2...
	I0114 02:31:57.463557    9007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:31:57.463695    9007 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:31:57.464204    9007 out.go:303] Setting JSON to false
	I0114 02:31:57.482895    9007 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1891,"bootTime":1673690426,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 02:31:57.483001    9007 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:31:57.505119    9007 out.go:177] * [multinode-022829] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:31:57.526493    9007 notify.go:220] Checking for updates...
	I0114 02:31:57.547566    9007 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:31:57.569881    9007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:31:57.591669    9007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:31:57.634594    9007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:31:57.677621    9007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 02:31:57.700699    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:31:57.700818    9007 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:31:57.762971    9007 docker.go:138] docker version: linux-20.10.21
	I0114 02:31:57.763119    9007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:31:57.902048    9007 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-14 10:31:57.81235194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:31:57.946152    9007 out.go:177] * Using the docker driver based on existing profile
	I0114 02:31:57.974376    9007 start.go:294] selected driver: docker
	I0114 02:31:57.974404    9007 start.go:838] validating driver "docker" against &{Name:multinode-022829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false
logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:31:57.974570    9007 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:31:57.974817    9007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:31:58.115544    9007 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-14 10:31:58.026014826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:31:58.117946    9007 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 02:31:58.117974    9007 cni.go:95] Creating CNI manager for ""
	I0114 02:31:58.117982    9007 cni.go:156] 3 nodes found, recommending kindnet
	I0114 02:31:58.118002    9007 start_flags.go:319] config:
	{Name:multinode-022829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false
nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:31:58.159657    9007 out.go:177] * Starting control plane node multinode-022829 in cluster multinode-022829
	I0114 02:31:58.182909    9007 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:31:58.205599    9007 out.go:177] * Pulling base image ...
	I0114 02:31:58.263758    9007 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:31:58.263817    9007 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:31:58.263859    9007 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 02:31:58.263898    9007 cache.go:57] Caching tarball of preloaded images
	I0114 02:31:58.264082    9007 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 02:31:58.264102    9007 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 02:31:58.264865    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:31:58.321989    9007 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 02:31:58.322005    9007 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 02:31:58.322033    9007 cache.go:193] Successfully downloaded all kic artifacts
	I0114 02:31:58.322071    9007 start.go:364] acquiring machines lock for multinode-022829: {Name:mk7213570c70d360de889fa6f810478b8bc1fac4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:31:58.322163    9007 start.go:368] acquired machines lock for "multinode-022829" in 72.701µs
	I0114 02:31:58.322188    9007 start.go:96] Skipping create...Using existing machine configuration
	I0114 02:31:58.322200    9007 fix.go:55] fixHost starting: 
	I0114 02:31:58.322461    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:31:58.379755    9007 fix.go:103] recreateIfNeeded on multinode-022829: state=Stopped err=<nil>
	W0114 02:31:58.379785    9007 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 02:31:58.400551    9007 out.go:177] * Restarting existing docker container for "multinode-022829" ...
	I0114 02:31:58.444490    9007 cli_runner.go:164] Run: docker start multinode-022829
	I0114 02:31:58.764834    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:31:58.821807    9007 kic.go:426] container "multinode-022829" state is running.
	I0114 02:31:58.822437    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829
	I0114 02:31:58.884410    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:31:58.884931    9007 machine.go:88] provisioning docker machine ...
	I0114 02:31:58.884962    9007 ubuntu.go:169] provisioning hostname "multinode-022829"
	I0114 02:31:58.885067    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:58.951248    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:31:58.951449    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:31:58.951462    9007 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-022829 && echo "multinode-022829" | sudo tee /etc/hostname
	I0114 02:31:59.118133    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-022829
	
	I0114 02:31:59.118294    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:59.178534    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:31:59.178700    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:31:59.178720    9007 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022829/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 02:31:59.294812    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:31:59.294842    9007 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 02:31:59.294864    9007 ubuntu.go:177] setting up certificates
	I0114 02:31:59.294872    9007 provision.go:83] configureAuth start
	I0114 02:31:59.294959    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829
	I0114 02:31:59.354024    9007 provision.go:138] copyHostCerts
	I0114 02:31:59.354076    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:31:59.354145    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 02:31:59.354153    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:31:59.354274    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 02:31:59.354450    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:31:59.354490    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 02:31:59.354495    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:31:59.354571    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 02:31:59.354687    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:31:59.354726    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 02:31:59.354731    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:31:59.354797    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 02:31:59.354913    9007 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.multinode-022829 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-022829]
	I0114 02:31:59.528999    9007 provision.go:172] copyRemoteCerts
	I0114 02:31:59.529076    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 02:31:59.529144    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:59.589255    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:31:59.676659    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 02:31:59.676763    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 02:31:59.695893    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 02:31:59.695988    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0114 02:31:59.718141    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 02:31:59.718243    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 02:31:59.736241    9007 provision.go:86] duration metric: configureAuth took 441.354539ms
	I0114 02:31:59.736264    9007 ubuntu.go:193] setting minikube options for container-runtime
	I0114 02:31:59.736526    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:31:59.736664    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:59.796959    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:31:59.797124    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:31:59.797134    9007 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 02:31:59.914493    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 02:31:59.914521    9007 ubuntu.go:71] root file system type: overlay
	I0114 02:31:59.914752    9007 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 02:31:59.914857    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:59.974871    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:31:59.975033    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:31:59.975084    9007 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 02:32:00.100829    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 02:32:00.100945    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.157586    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:00.157737    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:32:00.157750    9007 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 02:32:00.279682    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:32:00.279698    9007 machine.go:91] provisioned docker machine in 1.394754368s
	I0114 02:32:00.279708    9007 start.go:300] post-start starting for "multinode-022829" (driver="docker")
	I0114 02:32:00.279715    9007 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 02:32:00.279792    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 02:32:00.279858    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.335832    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:00.421877    9007 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 02:32:00.425589    9007 command_runner.go:130] > NAME="Ubuntu"
	I0114 02:32:00.425598    9007 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 02:32:00.425602    9007 command_runner.go:130] > ID=ubuntu
	I0114 02:32:00.425605    9007 command_runner.go:130] > ID_LIKE=debian
	I0114 02:32:00.425610    9007 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 02:32:00.425613    9007 command_runner.go:130] > VERSION_ID="20.04"
	I0114 02:32:00.425618    9007 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 02:32:00.425622    9007 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 02:32:00.425628    9007 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 02:32:00.425638    9007 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 02:32:00.425642    9007 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 02:32:00.425647    9007 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 02:32:00.425693    9007 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 02:32:00.425704    9007 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 02:32:00.425711    9007 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 02:32:00.425718    9007 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 02:32:00.425725    9007 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 02:32:00.425824    9007 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 02:32:00.426005    9007 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 02:32:00.426014    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /etc/ssl/certs/27282.pem
	I0114 02:32:00.426224    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 02:32:00.433578    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:32:00.450457    9007 start.go:303] post-start completed in 170.738296ms
	I0114 02:32:00.450544    9007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:32:00.450611    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.507359    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:00.591303    9007 command_runner.go:130] > 7%
	I0114 02:32:00.591397    9007 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 02:32:00.595621    9007 command_runner.go:130] > 91G
	I0114 02:32:00.595903    9007 fix.go:57] fixHost completed within 2.273699494s
	I0114 02:32:00.595914    9007 start.go:83] releasing machines lock for "multinode-022829", held for 2.273737486s
	I0114 02:32:00.596018    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829
	I0114 02:32:00.653920    9007 ssh_runner.go:195] Run: cat /version.json
	I0114 02:32:00.653928    9007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 02:32:00.654004    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.654004    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.713447    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:00.713589    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:00.796325    9007 command_runner.go:130] > {"iso_version": "v1.28.0-1668700269-15235", "kicbase_version": "v0.0.36-1668787669-15272", "minikube_version": "v1.28.0", "commit": "c883d3041e11322fb5c977f082b70bf31015848d"}
	I0114 02:32:00.851104    9007 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 02:32:00.853263    9007 ssh_runner.go:195] Run: systemctl --version
	I0114 02:32:00.858017    9007 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I0114 02:32:00.858046    9007 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0114 02:32:00.858275    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0114 02:32:00.865671    9007 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0114 02:32:00.878291    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:00.944303    9007 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0114 02:32:01.028725    9007 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 02:32:01.038014    9007 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0114 02:32:01.038136    9007 command_runner.go:130] > [Unit]
	I0114 02:32:01.038145    9007 command_runner.go:130] > Description=Docker Application Container Engine
	I0114 02:32:01.038150    9007 command_runner.go:130] > Documentation=https://docs.docker.com
	I0114 02:32:01.038185    9007 command_runner.go:130] > BindsTo=containerd.service
	I0114 02:32:01.038193    9007 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0114 02:32:01.038198    9007 command_runner.go:130] > Wants=network-online.target
	I0114 02:32:01.038204    9007 command_runner.go:130] > Requires=docker.socket
	I0114 02:32:01.038210    9007 command_runner.go:130] > StartLimitBurst=3
	I0114 02:32:01.038221    9007 command_runner.go:130] > StartLimitIntervalSec=60
	I0114 02:32:01.038232    9007 command_runner.go:130] > [Service]
	I0114 02:32:01.038240    9007 command_runner.go:130] > Type=notify
	I0114 02:32:01.038246    9007 command_runner.go:130] > Restart=on-failure
	I0114 02:32:01.038254    9007 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0114 02:32:01.038268    9007 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0114 02:32:01.038278    9007 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0114 02:32:01.038283    9007 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0114 02:32:01.038290    9007 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0114 02:32:01.038295    9007 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0114 02:32:01.038302    9007 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0114 02:32:01.038314    9007 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0114 02:32:01.038320    9007 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0114 02:32:01.038323    9007 command_runner.go:130] > ExecStart=
	I0114 02:32:01.038335    9007 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0114 02:32:01.038340    9007 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0114 02:32:01.038347    9007 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0114 02:32:01.038353    9007 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0114 02:32:01.038356    9007 command_runner.go:130] > LimitNOFILE=infinity
	I0114 02:32:01.038360    9007 command_runner.go:130] > LimitNPROC=infinity
	I0114 02:32:01.038366    9007 command_runner.go:130] > LimitCORE=infinity
	I0114 02:32:01.038372    9007 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0114 02:32:01.038378    9007 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0114 02:32:01.038382    9007 command_runner.go:130] > TasksMax=infinity
	I0114 02:32:01.038385    9007 command_runner.go:130] > TimeoutStartSec=0
	I0114 02:32:01.038390    9007 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0114 02:32:01.038394    9007 command_runner.go:130] > Delegate=yes
	I0114 02:32:01.038410    9007 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0114 02:32:01.038416    9007 command_runner.go:130] > KillMode=process
	I0114 02:32:01.038429    9007 command_runner.go:130] > [Install]
	I0114 02:32:01.038434    9007 command_runner.go:130] > WantedBy=multi-user.target
	I0114 02:32:01.038916    9007 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 02:32:01.038985    9007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 02:32:01.048432    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 02:32:01.060759    9007 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0114 02:32:01.060770    9007 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0114 02:32:01.061514    9007 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 02:32:01.126219    9007 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 02:32:01.195178    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:01.263390    9007 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 02:32:01.507993    9007 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 02:32:01.576870    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:01.646399    9007 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 02:32:01.656079    9007 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 02:32:01.656162    9007 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 02:32:01.659969    9007 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0114 02:32:01.659981    9007 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 02:32:01.659987    9007 command_runner.go:130] > Device: 96h/150d	Inode: 117         Links: 1
	I0114 02:32:01.659992    9007 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0114 02:32:01.660002    9007 command_runner.go:130] > Access: 2023-01-14 10:32:00.951712281 +0000
	I0114 02:32:01.660009    9007 command_runner.go:130] > Modify: 2023-01-14 10:32:00.951712281 +0000
	I0114 02:32:01.660017    9007 command_runner.go:130] > Change: 2023-01-14 10:32:00.952712281 +0000
	I0114 02:32:01.660023    9007 command_runner.go:130] >  Birth: -
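The "Will wait 60s for socket path /var/run/cri-dockerd.sock" line followed by the stat above is a plain poll-until-exists check. A rough Go sketch of that wait, with the path and timeout taken from the log and the helper name invented for illustration:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // the socket file is there
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is present")
}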
	I0114 02:32:01.660059    9007 start.go:472] Will wait 60s for crictl version
	I0114 02:32:01.660107    9007 ssh_runner.go:195] Run: which crictl
	I0114 02:32:01.663599    9007 command_runner.go:130] > /usr/bin/crictl
	I0114 02:32:01.663674    9007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 02:32:01.692039    9007 command_runner.go:130] > Version:  0.1.0
	I0114 02:32:01.692054    9007 command_runner.go:130] > RuntimeName:  docker
	I0114 02:32:01.692059    9007 command_runner.go:130] > RuntimeVersion:  20.10.21
	I0114 02:32:01.692063    9007 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0114 02:32:01.694096    9007 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 02:32:01.694188    9007 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:32:01.721427    9007 command_runner.go:130] > 20.10.21
	I0114 02:32:01.723692    9007 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:32:01.749375    9007 command_runner.go:130] > 20.10.21
	I0114 02:32:01.797214    9007 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 02:32:01.797476    9007 cli_runner.go:164] Run: docker exec -t multinode-022829 dig +short host.docker.internal
	I0114 02:32:01.912460    9007 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 02:32:01.912592    9007 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 02:32:01.916872    9007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
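The bash one-liner above rewrites /etc/hosts by filtering out any existing host.minikube.internal line, appending the freshly dug host IP, and copying the result back via a temp file. A hedged Go equivalent of that ensure-one-entry pattern (the function name and the temp-file suffix are made up):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one line
// maps host to ip, mirroring the grep -v / echo / cp sequence in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	// Write to a temp file first, then rename, like the copy via /tmp/h.$$ above.
	tmp := path + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		panic(err)
	}
}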
	I0114 02:32:01.926568    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:01.983057    9007 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:32:01.983146    9007 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:32:02.004708    9007 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I0114 02:32:02.004722    9007 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I0114 02:32:02.004728    9007 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I0114 02:32:02.004734    9007 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I0114 02:32:02.004738    9007 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0114 02:32:02.004742    9007 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0114 02:32:02.004747    9007 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0114 02:32:02.004759    9007 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0114 02:32:02.004763    9007 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0114 02:32:02.004768    9007 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 02:32:02.004772    9007 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0114 02:32:02.006811    9007 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0114 02:32:02.006828    9007 docker.go:543] Images already preloaded, skipping extraction
	I0114 02:32:02.006917    9007 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:32:02.028462    9007 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I0114 02:32:02.028480    9007 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I0114 02:32:02.028484    9007 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I0114 02:32:02.028490    9007 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I0114 02:32:02.028494    9007 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0114 02:32:02.028499    9007 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0114 02:32:02.028505    9007 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0114 02:32:02.028510    9007 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0114 02:32:02.028516    9007 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0114 02:32:02.028520    9007 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 02:32:02.028524    9007 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0114 02:32:02.030492    9007 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0114 02:32:02.030510    9007 cache_images.go:84] Images are preloaded, skipping loading
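"Images are preloaded, skipping loading" is the outcome of comparing the docker images --format {{.Repository}}:{{.Tag}} listing against the image set the selected Kubernetes version needs. A small Go sketch of that comparison; the required list below is copied from the log output, and the helper name is invented:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// loadedImages returns the repo:tag names known to the local Docker daemon.
func loadedImages() (map[string]bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			have[line] = true
		}
	}
	return have, nil
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/kube-controller-manager:v1.25.3",
		"registry.k8s.io/kube-scheduler:v1.25.3",
		"registry.k8s.io/kube-proxy:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
		"registry.k8s.io/pause:3.8",
	}
	have, err := loadedImages()
	if err != nil {
		panic(err)
	}
	missing := 0
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("images are preloaded, skipping loading")
	}
}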
	I0114 02:32:02.030604    9007 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 02:32:02.099895    9007 command_runner.go:130] > systemd
	I0114 02:32:02.102214    9007 cni.go:95] Creating CNI manager for ""
	I0114 02:32:02.102227    9007 cni.go:156] 3 nodes found, recommending kindnet
	I0114 02:32:02.102247    9007 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 02:32:02.102270    9007 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-022829 NodeName:multinode-022829 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 02:32:02.102392    9007 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-022829"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 02:32:02.102473    9007 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-022829 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 02:32:02.102544    9007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 02:32:02.110081    9007 command_runner.go:130] > kubeadm
	I0114 02:32:02.110094    9007 command_runner.go:130] > kubectl
	I0114 02:32:02.110098    9007 command_runner.go:130] > kubelet
	I0114 02:32:02.110759    9007 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 02:32:02.110819    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 02:32:02.118050    9007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I0114 02:32:02.130652    9007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 02:32:02.143152    9007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
	I0114 02:32:02.156163    9007 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 02:32:02.160145    9007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:32:02.169777    9007 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829 for IP: 192.168.58.2
	I0114 02:32:02.169903    9007 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 02:32:02.169967    9007 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 02:32:02.170060    9007 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key
	I0114 02:32:02.170133    9007 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.key.cee25041
	I0114 02:32:02.170197    9007 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.key
	I0114 02:32:02.170206    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0114 02:32:02.170240    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0114 02:32:02.170286    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0114 02:32:02.170313    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0114 02:32:02.170335    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 02:32:02.170358    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 02:32:02.170379    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 02:32:02.170401    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 02:32:02.170505    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 02:32:02.170552    9007 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 02:32:02.170568    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 02:32:02.170606    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 02:32:02.170646    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 02:32:02.170683    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 02:32:02.170760    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:32:02.170795    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem -> /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.170819    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.170840    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.171314    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 02:32:02.188549    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0114 02:32:02.205624    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 02:32:02.223088    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 02:32:02.240397    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 02:32:02.257233    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 02:32:02.274184    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 02:32:02.291418    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 02:32:02.308716    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 02:32:02.325976    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 02:32:02.343388    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 02:32:02.360749    9007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 02:32:02.373779    9007 ssh_runner.go:195] Run: openssl version
	I0114 02:32:02.379031    9007 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 02:32:02.379294    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 02:32:02.387492    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.391289    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.391348    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.391406    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.396338    9007 command_runner.go:130] > b5213941
	I0114 02:32:02.396658    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 02:32:02.404197    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 02:32:02.412245    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.416188    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.416313    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.416362    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.422101    9007 command_runner.go:130] > 51391683
	I0114 02:32:02.422158    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 02:32:02.430041    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 02:32:02.438087    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.442012    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.442132    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.442182    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.447342    9007 command_runner.go:130] > 3ec20f2e
	I0114 02:32:02.447713    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
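Each CA file above is installed twice: first linked by name under /etc/ssl/certs, then aliased as <subject-hash>.0 (b5213941, 51391683, 3ec20f2e in the log) so OpenSSL-based clients can find it by hash lookup. A hedged Go sketch of that install step, assuming root and reusing the openssl invocation the log shows; this is illustrative, not minikube's own cert code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert links a CA certificate into /etc/ssl/certs by name and then
// adds the <subject-hash>.0 alias computed by openssl.
func installCACert(src, name string) error {
	linked := "/etc/ssl/certs/" + name
	if err := os.Symlink(src, linked); err != nil && !os.IsExist(err) {
		return err
	}
	// Compute the OpenSSL subject hash, e.g. b5213941 for minikubeCA.pem above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
	if err != nil {
		return err
	}
	alias := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	if err := os.Symlink(linked, alias); err != nil && !os.IsExist(err) {
		return err
	}
	return nil
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "minikubeCA.pem"); err != nil {
		panic(err)
	}
}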
	I0114 02:32:02.455050    9007 kubeadm.go:396] StartCluster: {Name:multinode-022829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:32:02.455190    9007 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 02:32:02.477968    9007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 02:32:02.492839    9007 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0114 02:32:02.492849    9007 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0114 02:32:02.492853    9007 command_runner.go:130] > /var/lib/minikube/etcd:
	I0114 02:32:02.492856    9007 command_runner.go:130] > member
	I0114 02:32:02.493474    9007 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 02:32:02.493491    9007 kubeadm.go:627] restartCluster start
	I0114 02:32:02.493551    9007 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 02:32:02.500525    9007 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:02.500605    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:02.558787    9007 kubeconfig.go:135] verify returned: extract IP: "multinode-022829" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:02.558877    9007 kubeconfig.go:146] "multinode-022829" context is missing from /Users/jenkins/minikube-integration/15642-1559/kubeconfig - will repair!
	I0114 02:32:02.559115    9007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/kubeconfig: {Name:mkb6d1db5780815291441dc67b348461b9325651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:32:02.559621    9007 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:02.559849    9007 kapi.go:59] client config for multinode-022829: &rest.Config{Host:"https://127.0.0.1:51427", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:32:02.560211    9007 cert_rotation.go:137] Starting client certificate rotation controller
	I0114 02:32:02.560439    9007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 02:32:02.568349    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:02.568421    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:02.576901    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:02.778966    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:02.779112    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:02.790114    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:02.979005    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:02.979221    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:02.990221    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.178583    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.178707    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.189738    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.379068    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.379257    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.390013    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.579061    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.579258    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.590144    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.779045    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.779211    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.790143    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.979018    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.979177    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.990176    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.177580    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.177785    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.188986    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.378536    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.378692    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.389475    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.578999    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.579157    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.590054    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.778876    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.779061    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.790083    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.977094    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.977269    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.987455    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.178484    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:05.178636    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:05.189676    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.378821    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:05.378945    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:05.390083    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.579054    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:05.579232    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:05.590456    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.590467    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:05.590523    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:05.598809    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.598822    9007 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
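The block of repeated "Checking apiserver status ... pgrep -xnf kube-apiserver" entries above is a fixed-interval poll that gives up after a few seconds and falls back to "needs reconfigure". A hedged Go sketch of that retry shape; the interval and pgrep pattern are taken from the log, the deadline is approximate, and everything else is invented:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID returns the newest kube-apiserver pid, or an error if none is running.
func apiserverPID() (string, error) {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	return string(out), nil
}

func main() {
	// The log polls roughly every 200ms for about three seconds before giving up.
	deadline := time.Now().Add(3 * time.Second)
	for {
		if pid, err := apiserverPID(); err == nil {
			fmt.Print("apiserver pid: ", pid)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("needs reconfigure: timed out waiting for the apiserver process")
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
}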
	I0114 02:32:05.598832    9007 kubeadm.go:1114] stopping kube-system containers ...
	I0114 02:32:05.598910    9007 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 02:32:05.620895    9007 command_runner.go:130] > 22dfc551af5e
	I0114 02:32:05.620910    9007 command_runner.go:130] > 2b17c5d2929a
	I0114 02:32:05.620915    9007 command_runner.go:130] > 4b8fe186dcad
	I0114 02:32:05.620919    9007 command_runner.go:130] > 2d0bd2f67f63
	I0114 02:32:05.620923    9007 command_runner.go:130] > 85252b069649
	I0114 02:32:05.620929    9007 command_runner.go:130] > ed7a47472cbc
	I0114 02:32:05.620934    9007 command_runner.go:130] > ed5ada705cee
	I0114 02:32:05.620938    9007 command_runner.go:130] > a7ee261cbfc6
	I0114 02:32:05.620942    9007 command_runner.go:130] > d3ae0d142c8f
	I0114 02:32:05.620946    9007 command_runner.go:130] > 9048785f4e90
	I0114 02:32:05.620950    9007 command_runner.go:130] > 516991d5f2e5
	I0114 02:32:05.620953    9007 command_runner.go:130] > 22eb2357fc11
	I0114 02:32:05.620957    9007 command_runner.go:130] > 32c139aa3617
	I0114 02:32:05.620962    9007 command_runner.go:130] > 88473b6a518e
	I0114 02:32:05.620966    9007 command_runner.go:130] > 037848e173d9
	I0114 02:32:05.620969    9007 command_runner.go:130] > 2da5274a0541
	I0114 02:32:05.623085    9007 docker.go:444] Stopping containers: [22dfc551af5e 2b17c5d2929a 4b8fe186dcad 2d0bd2f67f63 85252b069649 ed7a47472cbc ed5ada705cee a7ee261cbfc6 d3ae0d142c8f 9048785f4e90 516991d5f2e5 22eb2357fc11 32c139aa3617 88473b6a518e 037848e173d9 2da5274a0541]
	I0114 02:32:05.623181    9007 ssh_runner.go:195] Run: docker stop 22dfc551af5e 2b17c5d2929a 4b8fe186dcad 2d0bd2f67f63 85252b069649 ed7a47472cbc ed5ada705cee a7ee261cbfc6 d3ae0d142c8f 9048785f4e90 516991d5f2e5 22eb2357fc11 32c139aa3617 88473b6a518e 037848e173d9 2da5274a0541
	I0114 02:32:05.643649    9007 command_runner.go:130] > 22dfc551af5e
	I0114 02:32:05.643665    9007 command_runner.go:130] > 2b17c5d2929a
	I0114 02:32:05.643833    9007 command_runner.go:130] > 4b8fe186dcad
	I0114 02:32:05.643877    9007 command_runner.go:130] > 2d0bd2f67f63
	I0114 02:32:05.645160    9007 command_runner.go:130] > 85252b069649
	I0114 02:32:05.645170    9007 command_runner.go:130] > ed7a47472cbc
	I0114 02:32:05.645175    9007 command_runner.go:130] > ed5ada705cee
	I0114 02:32:05.645181    9007 command_runner.go:130] > a7ee261cbfc6
	I0114 02:32:05.645416    9007 command_runner.go:130] > d3ae0d142c8f
	I0114 02:32:05.645433    9007 command_runner.go:130] > 9048785f4e90
	I0114 02:32:05.645438    9007 command_runner.go:130] > 516991d5f2e5
	I0114 02:32:05.645441    9007 command_runner.go:130] > 22eb2357fc11
	I0114 02:32:05.645445    9007 command_runner.go:130] > 32c139aa3617
	I0114 02:32:05.645455    9007 command_runner.go:130] > 88473b6a518e
	I0114 02:32:05.645460    9007 command_runner.go:130] > 037848e173d9
	I0114 02:32:05.645466    9007 command_runner.go:130] > 2da5274a0541
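The container IDs echoed back above come from the docker ps -a --filter=name=k8s_.*_(kube-system)_ listing a few lines earlier; all of them are then handed to a single docker stop. A rough Go sketch of that list-then-stop step, with the filter string copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all kube-system pod containers, running or not, by ID.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers to stop")
		return
	}
	fmt.Println("Stopping containers:", ids)
	// docker stop accepts many IDs in one invocation, as the log shows.
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
}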
	I0114 02:32:05.647912    9007 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 02:32:05.658263    9007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 02:32:05.665717    9007 command_runner.go:130] > -rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	I0114 02:32:05.665729    9007 command_runner.go:130] > -rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	I0114 02:32:05.665735    9007 command_runner.go:130] > -rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	I0114 02:32:05.665746    9007 command_runner.go:130] > -rw------- 1 root root 5600 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	I0114 02:32:05.666477    9007 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	
	I0114 02:32:05.666551    9007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 02:32:05.673232    9007 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 02:32:05.673937    9007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 02:32:05.680705    9007 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 02:32:05.681415    9007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 02:32:05.688663    9007 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.688721    9007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 02:32:05.695729    9007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 02:32:05.703047    9007 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.703108    9007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0114 02:32:05.710086    9007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 02:32:05.717668    9007 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 02:32:05.717679    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:05.760869    9007 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 02:32:05.760884    9007 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0114 02:32:05.761142    9007 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0114 02:32:05.761326    9007 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 02:32:05.761651    9007 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0114 02:32:05.761821    9007 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0114 02:32:05.762185    9007 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0114 02:32:05.762198    9007 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0114 02:32:05.762578    9007 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0114 02:32:05.762935    9007 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 02:32:05.763052    9007 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 02:32:05.763059    9007 command_runner.go:130] > [certs] Using the existing "sa" key
	I0114 02:32:05.766233    9007 command_runner.go:130] ! W0114 10:32:05.756257    1173 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:05.766254    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:05.809504    9007 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 02:32:06.068011    9007 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0114 02:32:06.200991    9007 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0114 02:32:06.467532    9007 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 02:32:06.554303    9007 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 02:32:06.557987    9007 command_runner.go:130] ! W0114 10:32:05.804795    1183 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:06.558009    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:06.611817    9007 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 02:32:06.612446    9007 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 02:32:06.612456    9007 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0114 02:32:06.687580    9007 command_runner.go:130] ! W0114 10:32:06.597122    1205 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:06.687601    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:06.730619    9007 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 02:32:06.730634    9007 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 02:32:06.732486    9007 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 02:32:06.733461    9007 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 02:32:06.738948    9007 command_runner.go:130] ! W0114 10:32:06.725493    1239 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:06.738970    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:06.831207    9007 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 02:32:06.835831    9007 command_runner.go:130] ! W0114 10:32:06.826277    1255 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
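Because existing configuration files were found, the restart path re-runs individual kubeadm init phase subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml rather than doing a full kubeadm init. A hedged Go sketch of that phase sequence; this is not minikube's actual bootstrapper code, just the same shell steps from the log driven from Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// The log prepends the versioned binaries directory to PATH before each phase.
		env := os.Environ()
		for i, kv := range env {
			if strings.HasPrefix(kv, "PATH=") {
				env[i] = "PATH=/var/lib/minikube/binaries/v1.25.3:" + kv[len("PATH="):]
			}
		}
		cmd.Env = env
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "kubeadm", phase, "failed:", err)
			os.Exit(1)
		}
	}
}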
	I0114 02:32:06.835863    9007 api_server.go:51] waiting for apiserver process to appear ...
	I0114 02:32:06.835933    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:32:07.347441    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:32:07.845931    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:32:07.857137    9007 command_runner.go:130] > 1733
	I0114 02:32:07.857172    9007 api_server.go:71] duration metric: took 1.021311066s to wait for apiserver process to appear ...
	I0114 02:32:07.857182    9007 api_server.go:87] waiting for apiserver healthz status ...
	I0114 02:32:07.857195    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:07.858851    9007 api_server.go:268] stopped: https://127.0.0.1:51427/healthz: Get "https://127.0.0.1:51427/healthz": EOF
	I0114 02:32:08.358954    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:10.934069    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 02:32:10.934083    9007 api_server.go:102] status: https://127.0.0.1:51427/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 02:32:11.360467    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:11.368724    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 02:32:11.368738    9007 api_server.go:102] status: https://127.0.0.1:51427/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:32:11.860518    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:11.867742    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 02:32:11.867757    9007 api_server.go:102] status: https://127.0.0.1:51427/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:32:12.360376    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:12.368106    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 200:
	ok
	I0114 02:32:12.368162    9007 round_trippers.go:463] GET https://127.0.0.1:51427/version
	I0114 02:32:12.368168    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:12.368177    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:12.368183    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:12.374153    9007 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0114 02:32:12.374163    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:12.374169    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:12.374173    9007 round_trippers.go:580]     Content-Length: 263
	I0114 02:32:12.374179    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:12 GMT
	I0114 02:32:12.374183    9007 round_trippers.go:580]     Audit-Id: 5896713a-e9ab-4b3d-b23b-4895d0f821f3
	I0114 02:32:12.374189    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:12.374194    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:12.374199    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:12.374220    9007 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0114 02:32:12.374268    9007 api_server.go:140] control plane version: v1.25.3
	I0114 02:32:12.374276    9007 api_server.go:130] duration metric: took 4.517078938s to wait for apiserver health ...
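The 500 responses above are the apiserver's verbose /healthz output during the restart: every check passes except the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, and the endpoint flips to 200 once those hooks finish, which is when minikube records the ~4.5s "wait for apiserver health" metric. A rough manual equivalent of this polling, assuming the same forwarded apiserver port (51427) and that anonymous access to /healthz is allowed as in a default cluster, would be:

	# hypothetical manual check, not part of the test run
	curl -k 'https://127.0.0.1:51427/healthz?verbose'
	# or, with the cluster's kubeconfig in place:
	kubectl get --raw '/healthz?verbose'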
	I0114 02:32:12.374282    9007 cni.go:95] Creating CNI manager for ""
	I0114 02:32:12.374287    9007 cni.go:156] 3 nodes found, recommending kindnet
	I0114 02:32:12.412445    9007 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0114 02:32:12.433914    9007 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 02:32:12.440421    9007 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0114 02:32:12.440434    9007 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0114 02:32:12.440440    9007 command_runner.go:130] > Device: 8eh/142d	Inode: 1184766     Links: 1
	I0114 02:32:12.440445    9007 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 02:32:12.440450    9007 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0114 02:32:12.440454    9007 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0114 02:32:12.440458    9007 command_runner.go:130] > Change: 2023-01-14 10:06:30.247481244 +0000
	I0114 02:32:12.440466    9007 command_runner.go:130] >  Birth: -
	I0114 02:32:12.440512    9007 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 02:32:12.440518    9007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 02:32:12.454524    9007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 02:32:13.447772    9007 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0114 02:32:13.449556    9007 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0114 02:32:13.452148    9007 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0114 02:32:13.527275    9007 command_runner.go:130] > daemonset.apps/kindnet configured
	I0114 02:32:13.537707    9007 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.083156211s)
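The kubectl apply above pushes minikube's kindnet CNI manifest (clusterrole, clusterrolebinding, serviceaccount and daemonset, all reported unchanged or configured); the 1.08s metric covers only that apply call. One way to confirm the daemonset actually rolled out across the three nodes (the app=kindnet label is an assumption about minikube's manifest, not something shown in this log) is:

	# hypothetical follow-up check
	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide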
	I0114 02:32:13.537742    9007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 02:32:13.537844    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:13.537854    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:13.537864    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:13.537873    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:13.542700    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:13.542720    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:13.542728    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:13.542736    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:13.542748    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:13 GMT
	I0114 02:32:13.542770    9007 round_trippers.go:580]     Audit-Id: 46d51b2d-b1cd-41d4-9031-c062220a458a
	I0114 02:32:13.542802    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:13.542813    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:13.544298    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"719"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"415","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84181 chars]
	I0114 02:32:13.547374    9007 system_pods.go:59] 12 kube-system pods found
	I0114 02:32:13.547393    9007 system_pods.go:61] "coredns-565d847f94-xg88j" [8ba9cbef-253e-46ad-aa78-55875dc5939b] Running
	I0114 02:32:13.547399    9007 system_pods.go:61] "etcd-multinode-022829" [f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 02:32:13.547403    9007 system_pods.go:61] "kindnet-2ffw5" [6e2e34df-4259-4f9d-a1d8-b7c33a252211] Running
	I0114 02:32:13.547406    9007 system_pods.go:61] "kindnet-crlwb" [129cddf5-10fc-4467-ab9d-d9a47d195213] Running
	I0114 02:32:13.547410    9007 system_pods.go:61] "kindnet-pqh2t" [cb280495-e617-461c-a259-e28b47f301d6] Running
	I0114 02:32:13.547414    9007 system_pods.go:61] "kube-apiserver-multinode-022829" [b153813e-4767-4643-9cc4-ab5c1f8a2441] Running
	I0114 02:32:13.547420    9007 system_pods.go:61] "kube-controller-manager-multinode-022829" [3ecd3fea-11b6-4dd0-9ac1-200f293b0e22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 02:32:13.547424    9007 system_pods.go:61] "kube-proxy-6bgqj" [330a14fa-1ce0-4857-81a1-2988087382d4] Running
	I0114 02:32:13.547428    9007 system_pods.go:61] "kube-proxy-7p92j" [abe462b8-5607-4e29-b040-12678d7ec756] Running
	I0114 02:32:13.547432    9007 system_pods.go:61] "kube-proxy-pplrc" [f6acf6b8-0d1e-4694-85de-f70fb0bcfee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0114 02:32:13.547437    9007 system_pods.go:61] "kube-scheduler-multinode-022829" [dec76631-6f7c-433f-87e4-2d0c847b6f29] Running
	I0114 02:32:13.547441    9007 system_pods.go:61] "storage-provisioner" [29960f5b-1391-43dd-9ebb-93c76a894fa2] Running
	I0114 02:32:13.547445    9007 system_pods.go:74] duration metric: took 9.69295ms to wait for pod list to return data ...
	I0114 02:32:13.547452    9007 node_conditions.go:102] verifying NodePressure condition ...
	I0114 02:32:13.547492    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes
	I0114 02:32:13.547497    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:13.547503    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:13.547509    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:13.550581    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:13.550595    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:13.550601    9007 round_trippers.go:580]     Audit-Id: 9021e032-5f34-4678-9320-d3b8bfded768
	I0114 02:32:13.550606    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:13.550611    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:13.550616    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:13.550621    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:13.550625    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:13 GMT
	I0114 02:32:13.550802    9007 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"719"},"items":[{"metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16143 chars]
	I0114 02:32:13.551427    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:13.551441    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:13.551450    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:13.551453    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:13.551458    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:13.551461    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:13.551467    9007 node_conditions.go:105] duration metric: took 4.012324ms to run NodePressure ...
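The NodePressure step only reads each node's ephemeral-storage and cpu capacity from the NodeList above (three nodes, 107016164Ki and 6 CPUs each) and verifies the pressure conditions. The same information can be read by hand, for example:

	# hypothetical manual equivalent
	kubectl get node multinode-022829 -o jsonpath='{range .status.conditions[*]}{.type}={"="}{.status}{"\n"}{end}'
	kubectl get node multinode-022829 -o jsonpath='{.status.capacity}'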
	I0114 02:32:13.551480    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:13.844983    9007 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0114 02:32:13.948973    9007 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0114 02:32:13.952425    9007 command_runner.go:130] ! W0114 10:32:13.658132    2437 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
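kubeadm's "init phase addon all" re-applies the CoreDNS and kube-proxy addons from /var/tmp/minikube/kubeadm.yaml; the warning is kubeadm noting that the config's criSocket value carries no URL scheme, so it prepends "unix" itself (the explicit form would be unix:///var/run/cri-dockerd.sock). To see the value the config actually carries, one could run on the node, for example:

	# hypothetical inspection, path taken from the kubeadm command above
	sudo grep -n criSocket /var/tmp/minikube/kubeadm.yaml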
	I0114 02:32:13.952459    9007 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 02:32:13.952523    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0114 02:32:13.952532    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:13.952541    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:13.952549    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:13.956283    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:13.956302    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:13.956310    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:13.956316    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:13 GMT
	I0114 02:32:13.956322    9007 round_trippers.go:580]     Audit-Id: a7d51ca1-f62e-48ff-b775-d61ae92021ad
	I0114 02:32:13.956329    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:13.956334    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:13.956339    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:13.956568    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"724"},"items":[{"metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30919 chars]
	I0114 02:32:13.957509    9007 kubeadm.go:778] kubelet initialised
	I0114 02:32:13.957522    9007 kubeadm.go:779] duration metric: took 5.053909ms waiting for restarted kubelet to initialise ...
	I0114 02:32:13.957529    9007 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 02:32:13.957574    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:13.957580    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:13.957587    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:13.957595    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.019403    9007 round_trippers.go:574] Response Status: 200 OK in 61 milliseconds
	I0114 02:32:14.019437    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.019452    9007 round_trippers.go:580]     Audit-Id: 2b678049-9fa9-4318-a6e1-2bf3b099cbb1
	I0114 02:32:14.019467    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.019476    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.019481    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.019486    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.019491    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.021078    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"724"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"415","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84632 chars]
	I0114 02:32:14.023177    9007 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-xg88j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:14.023233    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/coredns-565d847f94-xg88j
	I0114 02:32:14.023244    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.023252    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.023257    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.026394    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:14.026409    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.026419    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.026425    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.026431    9007 round_trippers.go:580]     Audit-Id: 4204a8d1-cc18-4161-b8ff-75ad78931a9a
	I0114 02:32:14.026436    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.026441    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.026446    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.026506    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"415","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6343 chars]
	I0114 02:32:14.026782    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:14.026788    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.026795    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.026800    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.029356    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:14.029368    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.029375    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.029380    9007 round_trippers.go:580]     Audit-Id: a54c14b9-c397-4da1-9756-cd33fcc66791
	I0114 02:32:14.029385    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.029390    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.029395    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.029400    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.029457    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:14.029649    9007 pod_ready.go:92] pod "coredns-565d847f94-xg88j" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:14.029655    9007 pod_ready.go:81] duration metric: took 6.465458ms waiting for pod "coredns-565d847f94-xg88j" in "kube-system" namespace to be "Ready" ...
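This pod_ready wait is implemented as the GET polling shown here rather than a kubectl call, but a rough standalone equivalent for the same CoreDNS pod, with the same 4-minute budget, would be:

	# hypothetical equivalent, not how minikube itself waits
	kubectl -n kube-system wait --for=condition=Ready pod/coredns-565d847f94-xg88j --timeout=4m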
	I0114 02:32:14.029662    9007 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:14.029690    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:14.029695    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.029701    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.029708    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.032046    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:14.032056    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.032062    9007 round_trippers.go:580]     Audit-Id: 0d5ef7ae-aa62-48de-b92a-01e09f0eb750
	I0114 02:32:14.032067    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.032072    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.032077    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.032082    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.032087    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.032258    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:14.032493    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:14.032500    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.032506    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.032512    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.034705    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:14.034714    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.034720    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.034726    9007 round_trippers.go:580]     Audit-Id: f9e49e86-cb93-4d33-92cd-0ad5c27a5bb4
	I0114 02:32:14.034734    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.034739    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.034744    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.034748    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.034808    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:14.535351    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:14.535364    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.535371    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.535376    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.538562    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:14.538576    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.538592    9007 round_trippers.go:580]     Audit-Id: d0de57d6-f113-47aa-b915-e91c98eaca6f
	I0114 02:32:14.538598    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.538605    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.538611    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.538615    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.538621    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.538928    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:14.539204    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:14.539213    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.539220    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.539225    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.541382    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:14.541393    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.541398    9007 round_trippers.go:580]     Audit-Id: ff495af3-2e24-419f-8058-93577f6e8b23
	I0114 02:32:14.541403    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.541409    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.541414    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.541420    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.541424    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.541485    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:15.035495    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:15.035515    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:15.035525    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:15.035533    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:15.038632    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:15.038652    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:15.038660    9007 round_trippers.go:580]     Audit-Id: f2c06345-381c-4af2-84cd-11f6f9c43967
	I0114 02:32:15.038665    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:15.038670    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:15.038674    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:15.038678    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:15.038683    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:15 GMT
	I0114 02:32:15.038742    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:15.039027    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:15.039034    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:15.039039    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:15.039045    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:15.041117    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:15.041129    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:15.041134    9007 round_trippers.go:580]     Audit-Id: 920c4cd5-5b2b-4e06-a7dc-cc7ef0d28a35
	I0114 02:32:15.041140    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:15.041145    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:15.041149    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:15.041155    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:15.041159    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:15 GMT
	I0114 02:32:15.041260    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:15.535888    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:15.535907    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:15.535920    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:15.535930    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:15.539553    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:15.539567    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:15.539575    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:15 GMT
	I0114 02:32:15.539582    9007 round_trippers.go:580]     Audit-Id: cf0865a0-4e3d-46f4-b169-bf203962820c
	I0114 02:32:15.539603    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:15.539608    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:15.539615    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:15.539626    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:15.539736    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:15.539996    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:15.540002    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:15.540008    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:15.540013    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:15.542235    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:15.542244    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:15.542250    9007 round_trippers.go:580]     Audit-Id: 53ee419d-354a-498b-bfcb-1365fca3aecf
	I0114 02:32:15.542254    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:15.542260    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:15.542265    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:15.542270    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:15.542275    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:15 GMT
	I0114 02:32:15.542335    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:16.037129    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:16.037150    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:16.037163    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:16.037173    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:16.041251    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:16.041263    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:16.041302    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:16.041315    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:16 GMT
	I0114 02:32:16.041324    9007 round_trippers.go:580]     Audit-Id: 590bc0a0-3327-4208-900e-b0fdfc7c0822
	I0114 02:32:16.041330    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:16.041336    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:16.041341    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:16.041407    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:16.041674    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:16.041682    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:16.041688    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:16.041693    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:16.044367    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:16.044381    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:16.044387    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:16.044393    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:16 GMT
	I0114 02:32:16.044401    9007 round_trippers.go:580]     Audit-Id: 52a54fe8-4eca-4388-8e92-adc98141a52b
	I0114 02:32:16.044406    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:16.044416    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:16.044421    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:16.044497    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:16.044714    9007 pod_ready.go:102] pod "etcd-multinode-022829" in "kube-system" namespace has status "Ready":"False"
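etcd-multinode-022829 is the pod the earlier system_pods listing already reported as Running but with Ready:ContainersNotReady, so the wait loop keeps polling it roughly every 500ms until its Ready condition turns True. A quick manual spot check of the same condition, purely as an illustration, would be:

	# hypothetical spot check of the pod's Ready condition
	kubectl -n kube-system get pod etcd-multinode-022829 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'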
	I0114 02:32:16.537177    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:16.537210    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:16.537254    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:16.537307    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:16.541370    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:16.541388    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:16.541397    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:16.541405    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:16.541413    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:16.541417    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:16 GMT
	I0114 02:32:16.541422    9007 round_trippers.go:580]     Audit-Id: 7fb1e0d9-9113-4529-b3e5-0f0e31915db9
	I0114 02:32:16.541429    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:16.541753    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:16.542009    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:16.542017    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:16.542023    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:16.542029    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:16.544157    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:16.544167    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:16.544173    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:16.544179    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:16 GMT
	I0114 02:32:16.544187    9007 round_trippers.go:580]     Audit-Id: 2c3d1aaa-69de-4fae-9100-50a3e764ce54
	I0114 02:32:16.544193    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:16.544199    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:16.544203    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:16.544255    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:17.035265    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:17.035279    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:17.035288    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:17.035296    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:17.039043    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:17.039059    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:17.039065    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:17.039070    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:17.039077    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:17 GMT
	I0114 02:32:17.039082    9007 round_trippers.go:580]     Audit-Id: 0521d5e4-bcd6-4b5f-beeb-fea5bd9b0314
	I0114 02:32:17.039093    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:17.039099    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:17.039198    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:17.039576    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:17.039585    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:17.039593    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:17.039600    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:17.043512    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:17.043527    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:17.043534    9007 round_trippers.go:580]     Audit-Id: 9f6d7fdc-a1af-4e69-8740-04c1dc5634da
	I0114 02:32:17.043538    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:17.043543    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:17.043548    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:17.043554    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:17.043560    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:17 GMT
	I0114 02:32:17.043655    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:17.537226    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:17.537280    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:17.537298    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:17.537315    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:17.540826    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:17.540840    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:17.540849    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:17 GMT
	I0114 02:32:17.540854    9007 round_trippers.go:580]     Audit-Id: 7ac38a0e-97e4-418a-a1a1-cf25e4b9ca9c
	I0114 02:32:17.540859    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:17.540864    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:17.540869    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:17.540875    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:17.540943    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:17.541229    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:17.541236    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:17.541242    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:17.541247    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:17.543719    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:17.543729    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:17.543734    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:17.543740    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:17 GMT
	I0114 02:32:17.543751    9007 round_trippers.go:580]     Audit-Id: 6ec51d42-c466-453b-bb9d-b681cc7906b3
	I0114 02:32:17.543758    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:17.543765    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:17.543771    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:17.543843    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:18.035137    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:18.035151    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:18.035157    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:18.035162    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:18.037857    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:18.037871    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:18.037878    9007 round_trippers.go:580]     Audit-Id: f33bfc33-2a3c-4ef0-9b99-b21a8591196a
	I0114 02:32:18.037884    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:18.037893    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:18.037899    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:18.037904    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:18.037910    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:18 GMT
	I0114 02:32:18.037993    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:18.038273    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:18.038281    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:18.038287    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:18.038292    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:18.040616    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:18.040629    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:18.040635    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:18.040640    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:18.040645    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:18.040649    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:18.040655    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:18 GMT
	I0114 02:32:18.040660    9007 round_trippers.go:580]     Audit-Id: 04efb013-ef94-44fe-aa34-ab751fa14ed6
	I0114 02:32:18.040843    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:18.535744    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:18.535771    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:18.535834    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:18.535847    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:18.539686    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:18.539696    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:18.539702    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:18 GMT
	I0114 02:32:18.539707    9007 round_trippers.go:580]     Audit-Id: 848d4d86-753f-4624-8361-e68a6e4668f7
	I0114 02:32:18.539712    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:18.539720    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:18.539726    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:18.539731    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:18.539786    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:18.540042    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:18.540050    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:18.540057    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:18.540062    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:18.542227    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:18.542238    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:18.542244    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:18.542268    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:18 GMT
	I0114 02:32:18.542283    9007 round_trippers.go:580]     Audit-Id: f28ff3d9-f6ae-4594-acaf-7eca84ea5170
	I0114 02:32:18.542291    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:18.542298    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:18.542307    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:18.542449    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:18.542640    9007 pod_ready.go:102] pod "etcd-multinode-022829" in "kube-system" namespace has status "Ready":"False"
	I0114 02:32:19.035312    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:19.035340    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:19.035353    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:19.035364    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:19.039298    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:19.039313    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:19.039320    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:19 GMT
	I0114 02:32:19.039326    9007 round_trippers.go:580]     Audit-Id: 7b06b2de-634d-4023-99bd-78d7b75dcce1
	I0114 02:32:19.039331    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:19.039338    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:19.039343    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:19.039348    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:19.039401    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:19.039648    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:19.039655    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:19.039661    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:19.039666    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:19.041897    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:19.041906    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:19.041914    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:19.041920    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:19 GMT
	I0114 02:32:19.041925    9007 round_trippers.go:580]     Audit-Id: 55ac49b6-56ec-46d1-adf7-06496db18c67
	I0114 02:32:19.041930    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:19.041935    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:19.041939    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:19.042018    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:19.535210    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:19.535228    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:19.535237    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:19.535245    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:19.538480    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:19.538491    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:19.538498    9007 round_trippers.go:580]     Audit-Id: 2be90e77-6930-46d3-8f24-1a922ea051b5
	I0114 02:32:19.538503    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:19.538508    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:19.538513    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:19.538518    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:19.538523    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:19 GMT
	I0114 02:32:19.538579    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:19.538836    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:19.538843    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:19.538849    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:19.538854    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:19.540841    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:19.540851    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:19.540859    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:19.540865    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:19.540871    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:19.540875    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:19.540881    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:19 GMT
	I0114 02:32:19.540885    9007 round_trippers.go:580]     Audit-Id: 2fcfcf32-764d-4e38-b6c9-0814ad583fc9
	I0114 02:32:19.540941    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:20.035281    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:20.035304    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.035317    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.035328    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.039094    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:20.039109    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.039116    9007 round_trippers.go:580]     Audit-Id: 1f26eaac-7d74-499e-b089-3feac61ef623
	I0114 02:32:20.039126    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.039131    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.039135    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.039140    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.039145    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.039208    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"765","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I0114 02:32:20.039456    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:20.039463    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.039469    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.039474    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.041659    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:20.041667    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.041673    9007 round_trippers.go:580]     Audit-Id: 53852745-3747-4303-a3c3-400a9e4d0aa4
	I0114 02:32:20.041678    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.041683    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.041688    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.041693    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.041698    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.041765    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:20.041941    9007 pod_ready.go:92] pod "etcd-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:20.041951    9007 pod_ready.go:81] duration metric: took 6.012270896s waiting for pod "etcd-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:20.041961    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-022829" in "kube-system" namespace to be "Ready" ...
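	[editor's note] The trace above shows minikube's pod_ready helper polling the etcd-multinode-022829 pod roughly every 500 ms (GET pod, then GET node) until the pod's Ready condition flips to True (about 6 s in this run), after which it starts a fresh 4m0s wait for kube-apiserver-multinode-022829. The following is a minimal, hypothetical client-go sketch of that polling pattern, not minikube's actual pod_ready implementation; the waitForPodReady helper, the 500 ms interval, and the kubeconfig loading are illustrative assumptions.

	// Hypothetical sketch: wait for a kube-system pod to report Ready, with a timeout,
	// mirroring the request cadence visible in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls the pod every 500ms until its Ready condition is True
	// or the 4-minute timeout expires (the same budget logged by pod_ready.go:78).
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
		return wait.PollImmediateWithContext(ctx, 500*time.Millisecond, 4*time.Minute,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Load the local kubeconfig (assumption: default ~/.kube/config location).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-multinode-022829"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}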
	I0114 02:32:20.041986    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:20.041991    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.041997    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.042002    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.044280    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:20.044289    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.044295    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.044301    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.044306    9007 round_trippers.go:580]     Audit-Id: 789be0dd-0966-4d1c-8e23-3be8718b5fb4
	I0114 02:32:20.044311    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.044316    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.044322    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.044412    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:20.044673    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:20.044679    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.044685    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.044691    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.046776    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:20.046785    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.046791    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.046796    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.046802    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.046806    9007 round_trippers.go:580]     Audit-Id: d4a08213-918a-41e3-a1d4-6054b544f39b
	I0114 02:32:20.046812    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.046817    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.046861    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:20.547663    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:20.547683    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.547696    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.547706    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.551816    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:20.551827    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.551833    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.551838    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.551844    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.551848    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.551854    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.551858    9007 round_trippers.go:580]     Audit-Id: a0481b75-e34f-40b7-a671-654a0bf1f81d
	I0114 02:32:20.551971    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:20.552335    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:20.552349    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.552358    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.552366    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.554685    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:20.554694    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.554700    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.554705    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.554711    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.554716    9007 round_trippers.go:580]     Audit-Id: 81dc5e18-61f3-4c57-b44e-0fce7d654bbc
	I0114 02:32:20.554725    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.554730    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.554797    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:21.047487    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:21.047508    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:21.047521    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:21.047531    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:21.051457    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:21.051471    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:21.051479    9007 round_trippers.go:580]     Audit-Id: 6dcd3831-9996-4aba-940e-a68b406fdb53
	I0114 02:32:21.051486    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:21.051494    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:21.051501    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:21.051506    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:21.051512    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:21 GMT
	I0114 02:32:21.052072    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:21.052370    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:21.052379    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:21.052385    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:21.052390    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:21.054586    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:21.054597    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:21.054603    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:21 GMT
	I0114 02:32:21.054608    9007 round_trippers.go:580]     Audit-Id: 29507011-5709-49c9-813a-57b4474681dc
	I0114 02:32:21.054613    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:21.054618    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:21.054623    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:21.054628    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:21.054674    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:21.549250    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:21.549275    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:21.549307    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:21.549319    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:21.553434    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:21.553449    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:21.553456    9007 round_trippers.go:580]     Audit-Id: 8edeca49-52a7-400d-94f6-0c3e892f3f26
	I0114 02:32:21.553463    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:21.553470    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:21.553477    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:21.553484    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:21.553492    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:21 GMT
	I0114 02:32:21.553595    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:21.553904    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:21.553910    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:21.553916    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:21.553921    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:21.556246    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:21.556256    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:21.556263    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:21 GMT
	I0114 02:32:21.556269    9007 round_trippers.go:580]     Audit-Id: 2a023f3e-44ab-49d5-a6e1-28598aed2ce2
	I0114 02:32:21.556274    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:21.556279    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:21.556283    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:21.556288    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:21.556637    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:22.049299    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:22.049321    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:22.049334    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:22.049345    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:22.053720    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:22.053732    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:22.053738    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:22.053743    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:22.053749    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 02:32:22.053753    9007 round_trippers.go:580]     Audit-Id: c62392cd-868e-4bbc-bf0b-7f8ddf33c0c3
	I0114 02:32:22.053759    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:22.053764    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:22.053853    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:22.054166    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:22.054174    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:22.054182    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:22.054190    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:22.056206    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:22.056215    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:22.056221    9007 round_trippers.go:580]     Audit-Id: 66b11b91-0714-43d0-a265-dfbbe472030e
	I0114 02:32:22.056227    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:22.056233    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:22.056237    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:22.056243    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:22.056247    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 02:32:22.056290    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:22.056475    9007 pod_ready.go:102] pod "kube-apiserver-multinode-022829" in "kube-system" namespace has status "Ready":"False"
	I0114 02:32:22.547552    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:22.547575    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:22.547589    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:22.547600    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:22.551909    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:22.551921    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:22.551927    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:22.551938    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:22.551944    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:22.551949    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:22.551954    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 02:32:22.551959    9007 round_trippers.go:580]     Audit-Id: 0cd6db79-abd6-4f93-b018-1704da588545
	I0114 02:32:22.552032    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:22.552324    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:22.552330    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:22.552336    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:22.552342    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:22.554428    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:22.554438    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:22.554444    9007 round_trippers.go:580]     Audit-Id: c114a831-cba2-4f44-9eff-fa07d320f21e
	I0114 02:32:22.554449    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:22.554455    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:22.554460    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:22.554465    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:22.554470    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 02:32:22.554518    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:23.047948    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:23.047971    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:23.047984    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:23.047994    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:23.052659    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:23.052673    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:23.052680    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:23.052684    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:23.052689    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:23.052695    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 02:32:23.052700    9007 round_trippers.go:580]     Audit-Id: 41af783c-b311-4fe1-b72f-b05f03a04d14
	I0114 02:32:23.052705    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:23.052804    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:23.053092    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:23.053098    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:23.053105    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:23.053111    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:23.055221    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:23.055230    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:23.055236    9007 round_trippers.go:580]     Audit-Id: cf6fdac7-e1c0-4aaa-a131-6e06498ab0a5
	I0114 02:32:23.055242    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:23.055247    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:23.055252    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:23.055257    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:23.055263    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 02:32:23.055302    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:23.547997    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:23.548020    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:23.548034    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:23.548044    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:23.552225    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:23.552242    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:23.552250    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:23.552257    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:23.552264    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:23.552270    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:23.552278    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 02:32:23.552284    9007 round_trippers.go:580]     Audit-Id: f6ca9fff-ab05-4c0c-ac41-5ff32c7f5139
	I0114 02:32:23.552398    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:23.552706    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:23.552712    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:23.552718    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:23.552724    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:23.554888    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:23.554897    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:23.554903    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:23.554908    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:23.554913    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:23.554918    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:23.554923    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 02:32:23.554927    9007 round_trippers.go:580]     Audit-Id: a6d2b574-3e09-49a4-ad4c-29a622796d61
	I0114 02:32:23.554983    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:24.048239    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:24.048254    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:24.048263    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:24.048270    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:24.051039    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:24.051049    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:24.051055    9007 round_trippers.go:580]     Audit-Id: 3089f126-ecbe-404b-92c5-31176aee2b3e
	I0114 02:32:24.051059    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:24.051065    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:24.051070    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:24.051076    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:24.051081    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 02:32:24.051156    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:24.051442    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:24.051449    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:24.051456    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:24.051463    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:24.053614    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:24.053625    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:24.053631    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:24.053636    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 02:32:24.053643    9007 round_trippers.go:580]     Audit-Id: 62d5e1c6-aa9a-46c2-972b-9f8a44b4022a
	I0114 02:32:24.053648    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:24.053653    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:24.053658    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:24.053700    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:24.547377    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:24.547398    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:24.547411    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:24.547421    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:24.551626    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:24.551639    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:24.551645    9007 round_trippers.go:580]     Audit-Id: 77037102-9134-4c35-9faa-f7a2d4386c23
	I0114 02:32:24.551651    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:24.551656    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:24.551661    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:24.551666    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:24.551671    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 02:32:24.551761    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:24.552043    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:24.552050    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:24.552056    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:24.552068    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:24.554382    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:24.554391    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:24.554397    9007 round_trippers.go:580]     Audit-Id: a3db9850-5b3b-42a6-8c12-1fe4c4f2c4fa
	I0114 02:32:24.554402    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:24.554408    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:24.554412    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:24.554418    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:24.554422    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 02:32:24.554469    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:24.554656    9007 pod_ready.go:102] pod "kube-apiserver-multinode-022829" in "kube-system" namespace has status "Ready":"False"
	I0114 02:32:25.047479    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:25.047500    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.047513    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.047523    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.051755    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:25.051766    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.051773    9007 round_trippers.go:580]     Audit-Id: 1c661f70-904f-4e72-a358-5045c260708f
	I0114 02:32:25.051779    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.051785    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.051790    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.051795    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.051800    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.051860    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"792","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I0114 02:32:25.052133    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.052140    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.052146    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.052151    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.054236    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.054245    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.054251    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.054256    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.054262    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.054266    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.054271    9007 round_trippers.go:580]     Audit-Id: 9223b7f3-f2e1-4de0-a41c-728472ba2810
	I0114 02:32:25.054276    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.054315    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.054497    9007 pod_ready.go:92] pod "kube-apiserver-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.054507    9007 pod_ready.go:81] duration metric: took 5.0125291s waiting for pod "kube-apiserver-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.054515    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.054543    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022829
	I0114 02:32:25.054547    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.054553    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.054558    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.056680    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.056691    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.056697    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.056703    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.056707    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.056713    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.056718    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.056723    9007 round_trippers.go:580]     Audit-Id: 2eac0d91-4d46-4a44-917a-99cd9ae49a3d
	I0114 02:32:25.056776    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-022829","namespace":"kube-system","uid":"3ecd3fea-11b6-4dd0-9ac1-200f293b0e22","resourceVersion":"768","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf97c75e822bcdf884e5298e8f141a84","kubernetes.io/config.mirror":"cf97c75e822bcdf884e5298e8f141a84","kubernetes.io/config.seen":"2023-01-14T10:28:46.070561468Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8004 chars]
	I0114 02:32:25.057022    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.057029    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.057035    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.057054    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.059077    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.059086    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.059092    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.059097    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.059103    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.059107    9007 round_trippers.go:580]     Audit-Id: 9128f849-89a6-47e4-bd0d-31e76a4ea6e1
	I0114 02:32:25.059113    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.059117    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.059162    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.059339    9007 pod_ready.go:92] pod "kube-controller-manager-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.059346    9007 pod_ready.go:81] duration metric: took 4.825915ms waiting for pod "kube-controller-manager-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.059353    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6bgqj" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.059378    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-6bgqj
	I0114 02:32:25.059383    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.059388    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.059394    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.061593    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.061602    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.061607    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.061612    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.061617    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.061622    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.061628    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.061632    9007 round_trippers.go:580]     Audit-Id: bbcbbb18-aeb7-4c51-ba3f-7757c0f401ec
	I0114 02:32:25.061671    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6bgqj","generateName":"kube-proxy-","namespace":"kube-system","uid":"330a14fa-1ce0-4857-81a1-2988087382d4","resourceVersion":"679","creationTimestamp":"2023-01-14T10:30:18Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I0114 02:32:25.061898    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m03
	I0114 02:32:25.061904    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.061910    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.061916    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.063791    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:25.063800    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.063806    9007 round_trippers.go:580]     Audit-Id: 3f51acea-b865-4233-a2ea-e37ae3a6f8b6
	I0114 02:32:25.063813    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.063817    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.063822    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.063827    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.063832    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.063979    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829-m03","uid":"24958ca9-a14e-431f-b462-d3bfbcd7c387","resourceVersion":"692","creationTimestamp":"2023-01-14T10:31:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4323 chars]
	I0114 02:32:25.064139    9007 pod_ready.go:92] pod "kube-proxy-6bgqj" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.064146    9007 pod_ready.go:81] duration metric: took 4.78819ms waiting for pod "kube-proxy-6bgqj" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.064152    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7p92j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.064176    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-7p92j
	I0114 02:32:25.064181    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.064187    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.064193    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.065914    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:25.065923    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.065929    9007 round_trippers.go:580]     Audit-Id: c37a8e69-60e7-4d7a-aff2-edb1a78973c9
	I0114 02:32:25.065934    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.065939    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.065944    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.065949    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.065954    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.066094    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7p92j","generateName":"kube-proxy-","namespace":"kube-system","uid":"abe462b8-5607-4e29-b040-12678d7ec756","resourceVersion":"473","creationTimestamp":"2023-01-14T10:29:34Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I0114 02:32:25.066312    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m02
	I0114 02:32:25.066318    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.066324    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.066329    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.068489    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.068498    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.068505    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.068510    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.068515    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.068520    9007 round_trippers.go:580]     Audit-Id: 1f1928d4-1e06-4535-ab72-d4e91efbfb4b
	I0114 02:32:25.068526    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.068531    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.068571    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829-m02","uid":"3911d4d5-57fa-4f76-9a4f-ea2b104e8003","resourceVersion":"538","creationTimestamp":"2023-01-14T10:29:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4506 chars]
	I0114 02:32:25.068728    9007 pod_ready.go:92] pod "kube-proxy-7p92j" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.068735    9007 pod_ready.go:81] duration metric: took 4.577571ms waiting for pod "kube-proxy-7p92j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.068740    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pplrc" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.068784    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-pplrc
	I0114 02:32:25.068789    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.068794    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.068800    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.070556    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:25.070566    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.070571    9007 round_trippers.go:580]     Audit-Id: fe68db55-59f7-46eb-8b06-92e68a3e8b49
	I0114 02:32:25.070577    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.070581    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.070587    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.070592    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.070597    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.070788    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pplrc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f6acf6b8-0d1e-4694-85de-f70fb0bcfee7","resourceVersion":"743","creationTimestamp":"2023-01-14T10:29:10Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0114 02:32:25.071016    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.071022    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.071028    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.071034    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.072851    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:25.072859    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.072864    9007 round_trippers.go:580]     Audit-Id: 95a77ecb-7495-40fd-a52d-0c3b2e3f8436
	I0114 02:32:25.072869    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.072875    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.072879    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.072885    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.072889    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.073063    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.073235    9007 pod_ready.go:92] pod "kube-proxy-pplrc" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.073241    9007 pod_ready.go:81] duration metric: took 4.496352ms waiting for pod "kube-proxy-pplrc" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.073247    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.248533    9007 request.go:614] Waited for 175.226969ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022829
	I0114 02:32:25.248593    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022829
	I0114 02:32:25.248604    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.248617    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.248628    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.252665    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:25.252676    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.252681    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.252686    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.252691    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.252695    9007 round_trippers.go:580]     Audit-Id: 73869de0-69ff-4fcb-8e9b-d785d147adc8
	I0114 02:32:25.252700    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.252705    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.252794    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-022829","namespace":"kube-system","uid":"dec76631-6f7c-433f-87e4-2d0c847b6f29","resourceVersion":"781","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"802a051d3df9ed6e4a14219bcab9d87d","kubernetes.io/config.mirror":"802a051d3df9ed6e4a14219bcab9d87d","kubernetes.io/config.seen":"2023-01-14T10:28:46.070562243Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4886 chars]
	I0114 02:32:25.447985    9007 request.go:614] Waited for 194.961996ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.448076    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.448087    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.448102    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.448114    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.451648    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:25.451661    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.451669    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.451675    9007 round_trippers.go:580]     Audit-Id: b998b8bd-ec3e-49ab-868f-f9835834f4be
	I0114 02:32:25.451681    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.451687    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.451693    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.451700    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.451768    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.451979    9007 pod_ready.go:92] pod "kube-scheduler-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.451986    9007 pod_ready.go:81] duration metric: took 378.732619ms waiting for pod "kube-scheduler-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.451993    9007 pod_ready.go:38] duration metric: took 11.494429703s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
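The long stretch of paired GETs above (pod, then its node, repeated roughly every half second until pod_ready flips from "Ready":"False" to "Ready":"True") is a readiness poll on each system pod's PodReady condition. The following is only a minimal client-go sketch of that loop, not minikube's actual pod_ready.go; the function name waitPodReady and the 500ms interval are illustrative.

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod until its PodReady condition is True or the
    // timeout expires -- the same pattern the pod_ready.go lines above record.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }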
	I0114 02:32:25.452004    9007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 02:32:25.460203    9007 command_runner.go:130] > -16
	I0114 02:32:25.460220    9007 ops.go:34] apiserver oom_adj: -16
	I0114 02:32:25.460249    9007 kubeadm.go:631] restartCluster took 22.966695382s
	I0114 02:32:25.460261    9007 kubeadm.go:398] StartCluster complete in 23.005164118s
	I0114 02:32:25.460275    9007 settings.go:142] acquiring lock: {Name:mka95467446367990e489ec54b84107091d6186f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:32:25.460365    9007 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:25.460766    9007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/kubeconfig: {Name:mkb6d1db5780815291441dc67b348461b9325651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:32:25.461375    9007 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:25.461567    9007 kapi.go:59] client config for multinode-022829: &rest.Config{Host:"https://127.0.0.1:51427", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
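In the rest.Config dump above, QPS and Burst are zero, so client-go falls back to its defaults (5 requests/s, burst 10); that default limiter is what produced the earlier "Waited ... due to client-side throttling, not priority and fairness" lines. A hedged sketch of loading the same kubeconfig and raising those limits follows; the function name newFastClient and the 50/100 values are illustrative only, not what minikube sets.

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset from a kubeconfig and raises the
    // client-side rate limits that otherwise default to 5 QPS / burst 10.
    func newFastClient(kubeconfigPath string) (kubernetes.Interface, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50 // illustrative values, not minikube's configuration
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }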
	I0114 02:32:25.461761    9007 round_trippers.go:463] GET https://127.0.0.1:51427/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0114 02:32:25.461767    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.461773    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.461778    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.464260    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.464270    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.464276    9007 round_trippers.go:580]     Content-Length: 291
	I0114 02:32:25.464281    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.464286    9007 round_trippers.go:580]     Audit-Id: 8eafcb1b-4383-47a7-8950-4032a578f9e0
	I0114 02:32:25.464291    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.464296    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.464304    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.464310    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.464322    9007 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"caadf10f-dc39-47dc-8b33-5d3e20072eab","resourceVersion":"787","creationTimestamp":"2023-01-14T10:28:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0114 02:32:25.464410    9007 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-022829" rescaled to 1
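The GET on .../deployments/coredns/scale and the kapi.go:244 "rescaled to 1" line correspond to the Deployment's Scale subresource. Below is a minimal client-go sketch of that read-then-update; the function name ensureReplicas is an illustrative stand-in, not minikube's kapi code.

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureReplicas reads the deployment's Scale subresource and updates it
    // only when it differs from the desired replica count.
    func ensureReplicas(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
        scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == replicas {
            return nil // already at the desired size
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
        return err
    }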
	I0114 02:32:25.464440    9007 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 02:32:25.464456    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 02:32:25.464503    9007 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0114 02:32:25.488288    9007 out.go:177] * Verifying Kubernetes components...
	I0114 02:32:25.464630    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:32:25.488362    9007 addons.go:65] Setting storage-provisioner=true in profile "multinode-022829"
	I0114 02:32:25.488368    9007 addons.go:65] Setting default-storageclass=true in profile "multinode-022829"
	I0114 02:32:25.530273    9007 addons.go:227] Setting addon storage-provisioner=true in "multinode-022829"
	I0114 02:32:25.530291    9007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:32:25.530292    9007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-022829"
	W0114 02:32:25.530295    9007 addons.go:236] addon storage-provisioner should already be in state true
	I0114 02:32:25.520819    9007 command_runner.go:130] > apiVersion: v1
	I0114 02:32:25.530319    9007 command_runner.go:130] > data:
	I0114 02:32:25.530328    9007 command_runner.go:130] >   Corefile: |
	I0114 02:32:25.530332    9007 command_runner.go:130] >     .:53 {
	I0114 02:32:25.530348    9007 command_runner.go:130] >         errors
	I0114 02:32:25.530350    9007 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:32:25.530357    9007 command_runner.go:130] >         health {
	I0114 02:32:25.530370    9007 command_runner.go:130] >            lameduck 5s
	I0114 02:32:25.530376    9007 command_runner.go:130] >         }
	I0114 02:32:25.530379    9007 command_runner.go:130] >         ready
	I0114 02:32:25.530384    9007 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0114 02:32:25.530388    9007 command_runner.go:130] >            pods insecure
	I0114 02:32:25.530392    9007 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0114 02:32:25.530396    9007 command_runner.go:130] >            ttl 30
	I0114 02:32:25.530399    9007 command_runner.go:130] >         }
	I0114 02:32:25.530404    9007 command_runner.go:130] >         prometheus :9153
	I0114 02:32:25.530407    9007 command_runner.go:130] >         hosts {
	I0114 02:32:25.530411    9007 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0114 02:32:25.530417    9007 command_runner.go:130] >            fallthrough
	I0114 02:32:25.530421    9007 command_runner.go:130] >         }
	I0114 02:32:25.530425    9007 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0114 02:32:25.530434    9007 command_runner.go:130] >            max_concurrent 1000
	I0114 02:32:25.530438    9007 command_runner.go:130] >         }
	I0114 02:32:25.530442    9007 command_runner.go:130] >         cache 30
	I0114 02:32:25.530446    9007 command_runner.go:130] >         loop
	I0114 02:32:25.530453    9007 command_runner.go:130] >         reload
	I0114 02:32:25.530458    9007 command_runner.go:130] >         loadbalance
	I0114 02:32:25.530461    9007 command_runner.go:130] >     }
	I0114 02:32:25.530465    9007 command_runner.go:130] > kind: ConfigMap
	I0114 02:32:25.530468    9007 command_runner.go:130] > metadata:
	I0114 02:32:25.530473    9007 command_runner.go:130] >   creationTimestamp: "2023-01-14T10:28:57Z"
	I0114 02:32:25.530476    9007 command_runner.go:130] >   name: coredns
	I0114 02:32:25.530480    9007 command_runner.go:130] >   namespace: kube-system
	I0114 02:32:25.530484    9007 command_runner.go:130] >   resourceVersion: "373"
	I0114 02:32:25.530489    9007 command_runner.go:130] >   uid: 014885d0-0d84-4e89-ad26-9cef16bc04dc
	I0114 02:32:25.530566    9007 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0114 02:32:25.530598    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:32:25.530710    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:32:25.541611    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:25.595371    9007 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:25.616343    9007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 02:32:25.616674    9007 kapi.go:59] client config for multinode-022829: &rest.Config{Host:"https://127.0.0.1:51427", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:32:25.674638    9007 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 02:32:25.674664    9007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 02:32:25.674897    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:25.675730    9007 round_trippers.go:463] GET https://127.0.0.1:51427/apis/storage.k8s.io/v1/storageclasses
	I0114 02:32:25.675931    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.676004    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.676022    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.692618    9007 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0114 02:32:25.692650    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.692659    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.692664    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.692691    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.692697    9007 round_trippers.go:580]     Content-Length: 1273
	I0114 02:32:25.692702    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.692707    9007 round_trippers.go:580]     Audit-Id: d846d4fc-122c-4cd1-86eb-d5636caf93e9
	I0114 02:32:25.692713    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.692750    9007 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"standard","uid":"22ee76b0-cf85-4ae8-85bf-3be3c87e3cba","resourceVersion":"382","creationTimestamp":"2023-01-14T10:29:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0114 02:32:25.693168    9007 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"22ee76b0-cf85-4ae8-85bf-3be3c87e3cba","resourceVersion":"382","creationTimestamp":"2023-01-14T10:29:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0114 02:32:25.693200    9007 round_trippers.go:463] PUT https://127.0.0.1:51427/apis/storage.k8s.io/v1/storageclasses/standard
	I0114 02:32:25.693204    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.693211    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.693217    9007 round_trippers.go:473]     Content-Type: application/json
	I0114 02:32:25.693222    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.696781    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:25.696793    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.696799    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.696822    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.696830    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.696837    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.696844    9007 round_trippers.go:580]     Content-Length: 1220
	I0114 02:32:25.696850    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.696854    9007 round_trippers.go:580]     Audit-Id: 3c8efc8c-029f-486f-9f00-edf0e1069039
	I0114 02:32:25.696879    9007 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"22ee76b0-cf85-4ae8-85bf-3be3c87e3cba","resourceVersion":"382","creationTimestamp":"2023-01-14T10:29:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0114 02:32:25.696952    9007 addons.go:227] Setting addon default-storageclass=true in "multinode-022829"
	W0114 02:32:25.696960    9007 addons.go:236] addon default-storageclass should already be in state true
	I0114 02:32:25.696980    9007 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:32:25.697379    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:32:25.699132    9007 node_ready.go:35] waiting up to 6m0s for node "multinode-022829" to be "Ready" ...
	I0114 02:32:25.699213    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.699218    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.699224    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.699230    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.701880    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.701896    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.701904    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.701933    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.701946    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.701956    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.701967    9007 round_trippers.go:580]     Audit-Id: aee60fdc-b101-4e8a-be69-58d01f68c4f7
	I0114 02:32:25.701977    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.702146    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.702381    9007 node_ready.go:49] node "multinode-022829" has status "Ready":"True"
	I0114 02:32:25.702390    9007 node_ready.go:38] duration metric: took 3.238289ms waiting for node "multinode-022829" to be "Ready" ...
	I0114 02:32:25.702396    9007 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 02:32:25.736900    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:25.756451    9007 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 02:32:25.756463    9007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 02:32:25.756553    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:25.813585    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:25.827418    9007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 02:32:25.847507    9007 request.go:614] Waited for 145.076103ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:25.847571    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:25.847576    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.847583    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.847590    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.851705    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:25.851722    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.851731    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.851737    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.851744    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.851752    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.851759    9007 round_trippers.go:580]     Audit-Id: 34b9c5ce-2a62-4e22-aedb-f16503fa37b0
	I0114 02:32:25.851765    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.854725    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"756","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84540 chars]
	I0114 02:32:25.856827    9007 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-xg88j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.904132    9007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 02:32:26.038521    9007 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0114 02:32:26.039965    9007 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0114 02:32:26.042071    9007 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 02:32:26.044067    9007 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 02:32:26.045826    9007 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0114 02:32:26.047746    9007 request.go:614] Waited for 190.883825ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/coredns-565d847f94-xg88j
	I0114 02:32:26.047772    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/coredns-565d847f94-xg88j
	I0114 02:32:26.047777    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.047783    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.047789    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.050023    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:26.050036    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.050044    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.050055    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.050060    9007 round_trippers.go:580]     Audit-Id: b98b5afb-8c3e-4f3d-b6a7-5d29ef084765
	I0114 02:32:26.050065    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.050070    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.050075    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.050148    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"756","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6552 chars]
	I0114 02:32:26.052282    9007 command_runner.go:130] > pod/storage-provisioner configured
	I0114 02:32:26.154125    9007 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0114 02:32:26.202699    9007 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 02:32:26.223596    9007 addons.go:488] enableAddons completed in 759.101003ms
	I0114 02:32:26.248148    9007 request.go:614] Waited for 197.660565ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:26.248237    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:26.248248    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.248261    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.248274    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.252257    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:26.252273    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.252281    9007 round_trippers.go:580]     Audit-Id: 371364d0-f414-4639-b3d7-b080f86132e0
	I0114 02:32:26.252288    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.252295    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.252302    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.252309    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.252315    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.252402    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:26.252642    9007 pod_ready.go:92] pod "coredns-565d847f94-xg88j" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:26.252648    9007 pod_ready.go:81] duration metric: took 395.809499ms waiting for pod "coredns-565d847f94-xg88j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:26.252654    9007 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:26.448647    9007 request.go:614] Waited for 195.925176ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:26.448711    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:26.448727    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.448744    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.448761    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.453218    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:26.453233    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.453240    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.453245    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.453249    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.453256    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.453261    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.453266    9007 round_trippers.go:580]     Audit-Id: 2288153a-1320-4bd8-b5e4-10b788caf9e5
	I0114 02:32:26.453345    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"765","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I0114 02:32:26.648599    9007 request.go:614] Waited for 194.955808ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:26.648656    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:26.648667    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.648680    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.648726    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.652899    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:26.652910    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.652916    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.652921    9007 round_trippers.go:580]     Audit-Id: 1e2102f6-a764-4629-91e8-8b401d477418
	I0114 02:32:26.652925    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.652931    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.652935    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.652941    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.653029    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:26.653246    9007 pod_ready.go:92] pod "etcd-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:26.653252    9007 pod_ready.go:81] duration metric: took 400.592504ms waiting for pod "etcd-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:26.653263    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:26.847559    9007 request.go:614] Waited for 194.260248ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:26.847615    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:26.847621    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.847630    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.847645    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.850432    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:26.850443    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.850449    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.850454    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.850465    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.850471    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.850491    9007 round_trippers.go:580]     Audit-Id: 9c5c3248-e25c-48ec-8c10-a53fefa40371
	I0114 02:32:26.850499    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.850669    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"792","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I0114 02:32:27.047575    9007 request.go:614] Waited for 196.597625ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:27.047656    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:27.047666    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.047679    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.047689    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.052001    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:27.052018    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.052030    9007 round_trippers.go:580]     Audit-Id: 4f760086-4c49-4c81-99c3-7353707cd9d7
	I0114 02:32:27.052036    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.052041    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.052050    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.052056    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.052060    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.052117    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:27.052324    9007 pod_ready.go:92] pod "kube-apiserver-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:27.052331    9007 pod_ready.go:81] duration metric: took 399.061734ms waiting for pod "kube-apiserver-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:27.052338    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:27.247614    9007 request.go:614] Waited for 195.234933ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022829
	I0114 02:32:27.247696    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022829
	I0114 02:32:27.247707    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.247724    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.247737    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.251622    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:27.251637    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.251645    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.251652    9007 round_trippers.go:580]     Audit-Id: a6c57865-700f-452f-9ea6-37b6e70839c1
	I0114 02:32:27.251658    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.251665    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.251671    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.251678    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.252098    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-022829","namespace":"kube-system","uid":"3ecd3fea-11b6-4dd0-9ac1-200f293b0e22","resourceVersion":"768","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf97c75e822bcdf884e5298e8f141a84","kubernetes.io/config.mirror":"cf97c75e822bcdf884e5298e8f141a84","kubernetes.io/config.seen":"2023-01-14T10:28:46.070561468Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8004 chars]
	I0114 02:32:27.449487    9007 request.go:614] Waited for 197.08782ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:27.449568    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:27.449578    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.449620    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.449632    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.453615    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:27.453629    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.453638    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.453649    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.453656    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.453663    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.453670    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.453677    9007 round_trippers.go:580]     Audit-Id: f0ca6b36-204f-4cee-b89b-40c2015540dd
	I0114 02:32:27.453759    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:27.454039    9007 pod_ready.go:92] pod "kube-controller-manager-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:27.454049    9007 pod_ready.go:81] duration metric: took 401.705514ms waiting for pod "kube-controller-manager-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:27.454059    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6bgqj" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:27.648864    9007 request.go:614] Waited for 194.74606ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-6bgqj
	I0114 02:32:27.648983    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-6bgqj
	I0114 02:32:27.648995    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.649006    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.649018    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.653516    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:27.653534    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.653542    9007 round_trippers.go:580]     Audit-Id: 990f399a-b351-4eb2-8b75-da91279d7703
	I0114 02:32:27.653565    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.653579    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.653592    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.653600    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.653608    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.653680    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6bgqj","generateName":"kube-proxy-","namespace":"kube-system","uid":"330a14fa-1ce0-4857-81a1-2988087382d4","resourceVersion":"679","creationTimestamp":"2023-01-14T10:30:18Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I0114 02:32:27.848059    9007 request.go:614] Waited for 194.052924ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m03
	I0114 02:32:27.848108    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m03
	I0114 02:32:27.848117    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.848129    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.848141    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.852661    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:27.852675    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.852681    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.852686    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.852696    9007 round_trippers.go:580]     Audit-Id: 71eaddfb-05a5-480c-baf3-2b5a085d7d02
	I0114 02:32:27.852702    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.852706    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.852711    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.852770    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829-m03","uid":"24958ca9-a14e-431f-b462-d3bfbcd7c387","resourceVersion":"692","creationTimestamp":"2023-01-14T10:31:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4323 chars]
	I0114 02:32:27.852951    9007 pod_ready.go:92] pod "kube-proxy-6bgqj" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:27.852958    9007 pod_ready.go:81] duration metric: took 398.892958ms waiting for pod "kube-proxy-6bgqj" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:27.852965    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7p92j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.047846    9007 request.go:614] Waited for 194.814141ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-7p92j
	I0114 02:32:28.047917    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-7p92j
	I0114 02:32:28.047958    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.047973    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.047984    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.052582    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:28.052595    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.052601    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.052606    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.052611    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.052620    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.052625    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.052630    9007 round_trippers.go:580]     Audit-Id: c8872bae-9f98-4358-bf26-d15addc55588
	I0114 02:32:28.052700    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7p92j","generateName":"kube-proxy-","namespace":"kube-system","uid":"abe462b8-5607-4e29-b040-12678d7ec756","resourceVersion":"473","creationTimestamp":"2023-01-14T10:29:34Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I0114 02:32:28.248933    9007 request.go:614] Waited for 195.87012ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m02
	I0114 02:32:28.248986    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m02
	I0114 02:32:28.248999    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.249012    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.249024    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.252824    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:28.252837    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.252843    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.252853    9007 round_trippers.go:580]     Audit-Id: ffbb1ceb-915d-4550-b4af-3aaea48b6945
	I0114 02:32:28.252859    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.252863    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.252868    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.252873    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.252930    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829-m02","uid":"3911d4d5-57fa-4f76-9a4f-ea2b104e8003","resourceVersion":"538","creationTimestamp":"2023-01-14T10:29:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4506 chars]
	I0114 02:32:28.253118    9007 pod_ready.go:92] pod "kube-proxy-7p92j" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:28.253125    9007 pod_ready.go:81] duration metric: took 400.142072ms waiting for pod "kube-proxy-7p92j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.253135    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pplrc" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.447672    9007 request.go:614] Waited for 194.483956ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-pplrc
	I0114 02:32:28.447800    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-pplrc
	I0114 02:32:28.447812    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.447824    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.447838    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.451803    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:28.451818    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.451827    9007 round_trippers.go:580]     Audit-Id: 7a46278e-e3e7-4a9d-beda-1f1265525c87
	I0114 02:32:28.451834    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.451841    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.451847    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.451853    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.451861    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.452055    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pplrc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f6acf6b8-0d1e-4694-85de-f70fb0bcfee7","resourceVersion":"743","creationTimestamp":"2023-01-14T10:29:10Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0114 02:32:28.649514    9007 request.go:614] Waited for 197.145139ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:28.649645    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:28.649656    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.649669    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.649681    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.654083    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:28.654100    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.654108    9007 round_trippers.go:580]     Audit-Id: 80134d0f-7081-458d-9566-ea651b337b18
	I0114 02:32:28.654115    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.654123    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.654129    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.654138    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.654144    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.654219    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:28.654447    9007 pod_ready.go:92] pod "kube-proxy-pplrc" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:28.654454    9007 pod_ready.go:81] duration metric: took 401.311363ms waiting for pod "kube-proxy-pplrc" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.654461    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.848320    9007 request.go:614] Waited for 193.806303ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022829
	I0114 02:32:28.848432    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022829
	I0114 02:32:28.848443    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.848455    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.848465    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.852696    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:28.852711    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.852719    9007 round_trippers.go:580]     Audit-Id: 3ed19b2f-86f7-4fdc-a901-6c407dd3f1fc
	I0114 02:32:28.852726    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.852738    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.852747    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.852757    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.852785    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.852842    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-022829","namespace":"kube-system","uid":"dec76631-6f7c-433f-87e4-2d0c847b6f29","resourceVersion":"781","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"802a051d3df9ed6e4a14219bcab9d87d","kubernetes.io/config.mirror":"802a051d3df9ed6e4a14219bcab9d87d","kubernetes.io/config.seen":"2023-01-14T10:28:46.070562243Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4886 chars]
	I0114 02:32:29.048257    9007 request.go:614] Waited for 195.179619ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:29.048344    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:29.048354    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.048366    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.048379    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.052682    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:29.052698    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.052705    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.052710    9007 round_trippers.go:580]     Audit-Id: 6239e34e-c627-4a3f-9abe-e3d872c4338f
	I0114 02:32:29.052715    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.052720    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.052725    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.052730    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.052791    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:29.053009    9007 pod_ready.go:92] pod "kube-scheduler-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:29.053016    9007 pod_ready.go:81] duration metric: took 398.549335ms waiting for pod "kube-scheduler-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:29.053026    9007 pod_ready.go:38] duration metric: took 3.350613929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
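(The pod_ready.go waits above poll each system-critical pod until its PodReady condition reports True. As a minimal, self-contained sketch of that kind of check — not minikube's actual pod_ready.go logic, and the sample pod constructed in main is purely illustrative — the condition test looks roughly like this:)

```go
package main

// Illustrative PodReady check in the spirit of the pod_ready.go waits logged
// above. Not minikube's code; the in-memory sample pod is made up.

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println("ready:", podReady(pod)) // ready: true
}
```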
	I0114 02:32:29.053039    9007 api_server.go:51] waiting for apiserver process to appear ...
	I0114 02:32:29.053100    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:32:29.062410    9007 command_runner.go:130] > 1733
	I0114 02:32:29.063052    9007 api_server.go:71] duration metric: took 3.598589987s to wait for apiserver process to appear ...
	I0114 02:32:29.063062    9007 api_server.go:87] waiting for apiserver healthz status ...
	I0114 02:32:29.063068    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:29.068495    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 200:
	ok
	I0114 02:32:29.068534    9007 round_trippers.go:463] GET https://127.0.0.1:51427/version
	I0114 02:32:29.068540    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.068547    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.068553    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.069712    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:29.069721    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.069727    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.069734    9007 round_trippers.go:580]     Content-Length: 263
	I0114 02:32:29.069740    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.069745    9007 round_trippers.go:580]     Audit-Id: e96b41a7-3c6f-4e31-900b-d736e335fad9
	I0114 02:32:29.069751    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.069755    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.069760    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.069775    9007 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0114 02:32:29.069799    9007 api_server.go:140] control plane version: v1.25.3
	I0114 02:32:29.069805    9007 api_server.go:130] duration metric: took 6.739418ms to wait for apiserver health ...
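(The healthz probe and the /version request recorded above can be reproduced with a plain HTTP client. A minimal sketch follows; it is not minikube's api_server.go code, and the hard-coded URL plus the skipped TLS verification are assumptions for illustration only — the real client authenticates against the cluster CA.)

```go
package main

// Minimal sketch of the apiserver healthz/version probe logged above.
// Assumptions: hard-coded endpoint and InsecureSkipVerify, for the sketch only.

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

type serverVersion struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}

	// /healthz answers 200 with body "ok" once the apiserver is healthy.
	resp, err := client.Get("https://127.0.0.1:51427/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

	// /version reports the control-plane version (v1.25.3 in this run).
	resp, err = client.Get("https://127.0.0.1:51427/version")
	if err != nil {
		panic(err)
	}
	var v serverVersion
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}
```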
	I0114 02:32:29.069812    9007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 02:32:29.247714    9007 request.go:614] Waited for 177.857132ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:29.247770    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:29.247783    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.247808    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.247863    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.253465    9007 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0114 02:32:29.253480    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.253487    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.253491    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.253499    9007 round_trippers.go:580]     Audit-Id: bb243b56-e120-45db-bea5-3c4152e1e1a6
	I0114 02:32:29.253504    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.253509    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.253513    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.254460    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"756","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84540 chars]
	I0114 02:32:29.256384    9007 system_pods.go:59] 12 kube-system pods found
	I0114 02:32:29.256394    9007 system_pods.go:61] "coredns-565d847f94-xg88j" [8ba9cbef-253e-46ad-aa78-55875dc5939b] Running
	I0114 02:32:29.256398    9007 system_pods.go:61] "etcd-multinode-022829" [f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8] Running
	I0114 02:32:29.256401    9007 system_pods.go:61] "kindnet-2ffw5" [6e2e34df-4259-4f9d-a1d8-b7c33a252211] Running
	I0114 02:32:29.256405    9007 system_pods.go:61] "kindnet-crlwb" [129cddf5-10fc-4467-ab9d-d9a47d195213] Running
	I0114 02:32:29.256409    9007 system_pods.go:61] "kindnet-pqh2t" [cb280495-e617-461c-a259-e28b47f301d6] Running
	I0114 02:32:29.256414    9007 system_pods.go:61] "kube-apiserver-multinode-022829" [b153813e-4767-4643-9cc4-ab5c1f8a2441] Running
	I0114 02:32:29.256419    9007 system_pods.go:61] "kube-controller-manager-multinode-022829" [3ecd3fea-11b6-4dd0-9ac1-200f293b0e22] Running
	I0114 02:32:29.256424    9007 system_pods.go:61] "kube-proxy-6bgqj" [330a14fa-1ce0-4857-81a1-2988087382d4] Running
	I0114 02:32:29.256428    9007 system_pods.go:61] "kube-proxy-7p92j" [abe462b8-5607-4e29-b040-12678d7ec756] Running
	I0114 02:32:29.256432    9007 system_pods.go:61] "kube-proxy-pplrc" [f6acf6b8-0d1e-4694-85de-f70fb0bcfee7] Running
	I0114 02:32:29.256438    9007 system_pods.go:61] "kube-scheduler-multinode-022829" [dec76631-6f7c-433f-87e4-2d0c847b6f29] Running
	I0114 02:32:29.256456    9007 system_pods.go:61] "storage-provisioner" [29960f5b-1391-43dd-9ebb-93c76a894fa2] Running
	I0114 02:32:29.256465    9007 system_pods.go:74] duration metric: took 186.648054ms to wait for pod list to return data ...
	I0114 02:32:29.256473    9007 default_sa.go:34] waiting for default service account to be created ...
	I0114 02:32:29.447712    9007 request.go:614] Waited for 191.192358ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/default/serviceaccounts
	I0114 02:32:29.447791    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/default/serviceaccounts
	I0114 02:32:29.447801    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.447845    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.447862    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.451925    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:29.451936    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.451942    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.451952    9007 round_trippers.go:580]     Audit-Id: 35d9a91e-de35-4fd6-8d50-bc367017e522
	I0114 02:32:29.451958    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.451962    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.451967    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.451972    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.451978    9007 round_trippers.go:580]     Content-Length: 261
	I0114 02:32:29.451990    9007 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5c806b58-da2e-4969-a790-2c7b416acba0","resourceVersion":"316","creationTimestamp":"2023-01-14T10:29:10Z"}}]}
	I0114 02:32:29.452111    9007 default_sa.go:45] found service account: "default"
	I0114 02:32:29.452118    9007 default_sa.go:55] duration metric: took 195.640566ms for default service account to be created ...
	I0114 02:32:29.452123    9007 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 02:32:29.647824    9007 request.go:614] Waited for 195.660293ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:29.647904    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:29.647916    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.647929    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.647941    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.653264    9007 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0114 02:32:29.653288    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.653299    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.653307    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.653315    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.653322    9007 round_trippers.go:580]     Audit-Id: f0f7011f-dd67-4613-8936-a2cabf271ca7
	I0114 02:32:29.653330    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.653340    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.654372    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"756","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84540 chars]
	I0114 02:32:29.656330    9007 system_pods.go:86] 12 kube-system pods found
	I0114 02:32:29.656341    9007 system_pods.go:89] "coredns-565d847f94-xg88j" [8ba9cbef-253e-46ad-aa78-55875dc5939b] Running
	I0114 02:32:29.656350    9007 system_pods.go:89] "etcd-multinode-022829" [f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8] Running
	I0114 02:32:29.656354    9007 system_pods.go:89] "kindnet-2ffw5" [6e2e34df-4259-4f9d-a1d8-b7c33a252211] Running
	I0114 02:32:29.656358    9007 system_pods.go:89] "kindnet-crlwb" [129cddf5-10fc-4467-ab9d-d9a47d195213] Running
	I0114 02:32:29.656362    9007 system_pods.go:89] "kindnet-pqh2t" [cb280495-e617-461c-a259-e28b47f301d6] Running
	I0114 02:32:29.656365    9007 system_pods.go:89] "kube-apiserver-multinode-022829" [b153813e-4767-4643-9cc4-ab5c1f8a2441] Running
	I0114 02:32:29.656372    9007 system_pods.go:89] "kube-controller-manager-multinode-022829" [3ecd3fea-11b6-4dd0-9ac1-200f293b0e22] Running
	I0114 02:32:29.656376    9007 system_pods.go:89] "kube-proxy-6bgqj" [330a14fa-1ce0-4857-81a1-2988087382d4] Running
	I0114 02:32:29.656380    9007 system_pods.go:89] "kube-proxy-7p92j" [abe462b8-5607-4e29-b040-12678d7ec756] Running
	I0114 02:32:29.656384    9007 system_pods.go:89] "kube-proxy-pplrc" [f6acf6b8-0d1e-4694-85de-f70fb0bcfee7] Running
	I0114 02:32:29.656387    9007 system_pods.go:89] "kube-scheduler-multinode-022829" [dec76631-6f7c-433f-87e4-2d0c847b6f29] Running
	I0114 02:32:29.656391    9007 system_pods.go:89] "storage-provisioner" [29960f5b-1391-43dd-9ebb-93c76a894fa2] Running
	I0114 02:32:29.656396    9007 system_pods.go:126] duration metric: took 204.267755ms to wait for k8s-apps to be running ...
	I0114 02:32:29.656400    9007 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 02:32:29.656462    9007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:32:29.666195    9007 system_svc.go:56] duration metric: took 9.791057ms WaitForService to wait for kubelet.
	I0114 02:32:29.666208    9007 kubeadm.go:573] duration metric: took 4.201745398s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 02:32:29.666238    9007 node_conditions.go:102] verifying NodePressure condition ...
	I0114 02:32:29.849499    9007 request.go:614] Waited for 183.202107ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes
	I0114 02:32:29.849638    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes
	I0114 02:32:29.849649    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.849664    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.849674    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.854648    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:29.854661    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.854668    9007 round_trippers.go:580]     Audit-Id: 9fdebe90-6a3a-43d4-8799-1f0266910e16
	I0114 02:32:29.854673    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.854677    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.854682    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.854691    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.854697    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.854801    9007 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16143 chars]
	I0114 02:32:29.855226    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:29.855234    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:29.855244    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:29.855248    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:29.855251    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:29.855255    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:29.855258    9007 node_conditions.go:105] duration metric: took 189.012802ms to run NodePressure ...
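(The NodePressure check above reads the same node-status capacity fields for each cluster member — ephemeral storage 107016164Ki and 6 CPUs per node in this run. A small client-go sketch that lists nodes and prints those fields; it is not the code path minikube itself uses, and the kubeconfig location, clientcmd.RecommendedHomeFile, is an assumption.)

```go
package main

// Client-go sketch that lists nodes and reads the capacity fields reported
// by the NodePressure check above. Not minikube's implementation; kubeconfig
// path is assumed to be the default ~/.kube/config.

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
```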
	I0114 02:32:29.855265    9007 start.go:217] waiting for startup goroutines ...
	I0114 02:32:29.855836    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:32:29.855919    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:32:29.897495    9007 out.go:177] * Starting worker node multinode-022829-m02 in cluster multinode-022829
	I0114 02:32:29.919592    9007 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:32:29.940701    9007 out.go:177] * Pulling base image ...
	I0114 02:32:29.961500    9007 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:32:29.961518    9007 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:32:29.961537    9007 cache.go:57] Caching tarball of preloaded images
	I0114 02:32:29.961739    9007 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 02:32:29.961761    9007 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 02:32:29.962645    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:32:30.018552    9007 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 02:32:30.018565    9007 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 02:32:30.018588    9007 cache.go:193] Successfully downloaded all kic artifacts
	I0114 02:32:30.018615    9007 start.go:364] acquiring machines lock for multinode-022829-m02: {Name:mk6c619d9d56cbda4f1a28e82601a01ccd5e065f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:32:30.018696    9007 start.go:368] acquired machines lock for "multinode-022829-m02" in 71.367µs
	I0114 02:32:30.018718    9007 start.go:96] Skipping create...Using existing machine configuration
	I0114 02:32:30.018724    9007 fix.go:55] fixHost starting: m02
	I0114 02:32:30.018992    9007 cli_runner.go:164] Run: docker container inspect multinode-022829-m02 --format={{.State.Status}}
	I0114 02:32:30.076194    9007 fix.go:103] recreateIfNeeded on multinode-022829-m02: state=Stopped err=<nil>
	W0114 02:32:30.076215    9007 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 02:32:30.098026    9007 out.go:177] * Restarting existing docker container for "multinode-022829-m02" ...
	I0114 02:32:30.139610    9007 cli_runner.go:164] Run: docker start multinode-022829-m02
	I0114 02:32:30.467610    9007 cli_runner.go:164] Run: docker container inspect multinode-022829-m02 --format={{.State.Status}}
	I0114 02:32:30.528233    9007 kic.go:426] container "multinode-022829-m02" state is running.
	I0114 02:32:30.528831    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829-m02
	I0114 02:32:30.590604    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:32:30.591117    9007 machine.go:88] provisioning docker machine ...
	I0114 02:32:30.591134    9007 ubuntu.go:169] provisioning hostname "multinode-022829-m02"
	I0114 02:32:30.591213    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:30.667576    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:30.667849    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:30.667860    9007 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-022829-m02 && echo "multinode-022829-m02" | sudo tee /etc/hostname
	I0114 02:32:30.836800    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-022829-m02
	
	I0114 02:32:30.836900    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:30.897502    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:30.897676    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:30.897689    9007 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022829-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022829-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022829-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 02:32:31.013909    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:32:31.013941    9007 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 02:32:31.013966    9007 ubuntu.go:177] setting up certificates
	I0114 02:32:31.013975    9007 provision.go:83] configureAuth start
	I0114 02:32:31.014100    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829-m02
	I0114 02:32:31.076266    9007 provision.go:138] copyHostCerts
	I0114 02:32:31.076317    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:32:31.076392    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 02:32:31.076398    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:32:31.076546    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 02:32:31.076723    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:32:31.076772    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 02:32:31.076777    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:32:31.076852    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 02:32:31.076977    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:32:31.077029    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 02:32:31.077035    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:32:31.077105    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 02:32:31.077233    9007 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.multinode-022829-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-022829-m02]
	I0114 02:32:31.155049    9007 provision.go:172] copyRemoteCerts
	I0114 02:32:31.155122    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 02:32:31.155190    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.221275    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:31.310642    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 02:32:31.310729    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 02:32:31.328177    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 02:32:31.328267    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0114 02:32:31.346801    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 02:32:31.346906    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 02:32:31.364598    9007 provision.go:86] duration metric: configureAuth took 350.595855ms
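(Each ssh_runner/sshutil step in this provisioning phase opens an SSH session to the node's forwarded port — 127.0.0.1:51454, user "docker", the machine's id_rsa key — and runs a command. A rough sketch with golang.org/x/crypto/ssh follows; it is not how minikube's sshutil is implemented, and the key path, the host-key handling, and the example command are assumptions for illustration.)

```go
package main

// Illustrative "run a command over SSH" sketch, in the spirit of the
// ssh_runner calls logged above. Not minikube's implementation; the key path
// and InsecureIgnoreHostKey are assumptions for the sketch only.

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := os.ExpandEnv("$HOME/.minikube/machines/multinode-022829-m02/id_rsa") // assumed path
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:51454", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo systemctl is-active docker")
	fmt.Printf("%s (err=%v)\n", out, err)
}
```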
	I0114 02:32:31.364611    9007 ubuntu.go:193] setting minikube options for container-runtime
	I0114 02:32:31.364801    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:32:31.364882    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.426464    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:31.426642    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:31.426652    9007 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 02:32:31.543817    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 02:32:31.543830    9007 ubuntu.go:71] root file system type: overlay
	I0114 02:32:31.543964    9007 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 02:32:31.544047    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.602200    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:31.602359    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:31.602408    9007 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 02:32:31.727457    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 02:32:31.727576    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.784996    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:31.785151    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:31.785164    9007 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 02:32:31.905443    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:32:31.905466    9007 machine.go:91] provisioned docker machine in 1.314337908s
	I0114 02:32:31.905474    9007 start.go:300] post-start starting for "multinode-022829-m02" (driver="docker")
	I0114 02:32:31.905480    9007 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 02:32:31.905567    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 02:32:31.905636    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.962523    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:32.050051    9007 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 02:32:32.053688    9007 command_runner.go:130] > NAME="Ubuntu"
	I0114 02:32:32.053700    9007 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 02:32:32.053704    9007 command_runner.go:130] > ID=ubuntu
	I0114 02:32:32.053710    9007 command_runner.go:130] > ID_LIKE=debian
	I0114 02:32:32.053717    9007 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 02:32:32.053722    9007 command_runner.go:130] > VERSION_ID="20.04"
	I0114 02:32:32.053729    9007 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 02:32:32.053735    9007 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 02:32:32.053740    9007 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 02:32:32.053750    9007 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 02:32:32.053755    9007 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 02:32:32.053759    9007 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 02:32:32.053798    9007 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 02:32:32.053812    9007 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 02:32:32.053819    9007 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 02:32:32.053824    9007 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 02:32:32.053829    9007 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 02:32:32.053933    9007 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 02:32:32.054115    9007 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 02:32:32.054123    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /etc/ssl/certs/27282.pem
	I0114 02:32:32.054334    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 02:32:32.061876    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:32:32.078838    9007 start.go:303] post-start completed in 173.354372ms
	I0114 02:32:32.078925    9007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:32:32.078994    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:32.137115    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:32.219758    9007 command_runner.go:130] > 7%
	I0114 02:32:32.219848    9007 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 02:32:32.224323    9007 command_runner.go:130] > 91G
	I0114 02:32:32.224574    9007 fix.go:57] fixHost completed within 2.205843293s
	I0114 02:32:32.224585    9007 start.go:83] releasing machines lock for "multinode-022829-m02", held for 2.205876001s
	I0114 02:32:32.224678    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829-m02
	I0114 02:32:32.306498    9007 out.go:177] * Found network options:
	I0114 02:32:32.327676    9007 out.go:177]   - NO_PROXY=192.168.58.2
	W0114 02:32:32.348450    9007 proxy.go:119] fail to check proxy env: Error ip not in block
	W0114 02:32:32.348487    9007 proxy.go:119] fail to check proxy env: Error ip not in block
	I0114 02:32:32.348610    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0114 02:32:32.348612    9007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 02:32:32.348671    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:32.348691    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:32.409876    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:32.411196    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:32.548417    9007 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 02:32:32.548489    9007 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0114 02:32:32.561878    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:32.638916    9007 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0114 02:32:32.724893    9007 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 02:32:32.735612    9007 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0114 02:32:32.735763    9007 command_runner.go:130] > [Unit]
	I0114 02:32:32.735775    9007 command_runner.go:130] > Description=Docker Application Container Engine
	I0114 02:32:32.735780    9007 command_runner.go:130] > Documentation=https://docs.docker.com
	I0114 02:32:32.735784    9007 command_runner.go:130] > BindsTo=containerd.service
	I0114 02:32:32.735791    9007 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0114 02:32:32.735797    9007 command_runner.go:130] > Wants=network-online.target
	I0114 02:32:32.735809    9007 command_runner.go:130] > Requires=docker.socket
	I0114 02:32:32.735816    9007 command_runner.go:130] > StartLimitBurst=3
	I0114 02:32:32.735821    9007 command_runner.go:130] > StartLimitIntervalSec=60
	I0114 02:32:32.735828    9007 command_runner.go:130] > [Service]
	I0114 02:32:32.735834    9007 command_runner.go:130] > Type=notify
	I0114 02:32:32.735840    9007 command_runner.go:130] > Restart=on-failure
	I0114 02:32:32.735854    9007 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0114 02:32:32.735864    9007 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0114 02:32:32.735883    9007 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0114 02:32:32.735894    9007 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0114 02:32:32.735903    9007 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0114 02:32:32.735913    9007 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0114 02:32:32.735921    9007 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0114 02:32:32.735930    9007 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0114 02:32:32.735943    9007 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0114 02:32:32.735950    9007 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0114 02:32:32.735953    9007 command_runner.go:130] > ExecStart=
	I0114 02:32:32.735999    9007 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0114 02:32:32.736011    9007 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0114 02:32:32.736034    9007 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0114 02:32:32.736047    9007 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0114 02:32:32.736053    9007 command_runner.go:130] > LimitNOFILE=infinity
	I0114 02:32:32.736064    9007 command_runner.go:130] > LimitNPROC=infinity
	I0114 02:32:32.736075    9007 command_runner.go:130] > LimitCORE=infinity
	I0114 02:32:32.736086    9007 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0114 02:32:32.736095    9007 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0114 02:32:32.736100    9007 command_runner.go:130] > TasksMax=infinity
	I0114 02:32:32.736104    9007 command_runner.go:130] > TimeoutStartSec=0
	I0114 02:32:32.736110    9007 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0114 02:32:32.736115    9007 command_runner.go:130] > Delegate=yes
	I0114 02:32:32.736127    9007 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0114 02:32:32.736133    9007 command_runner.go:130] > KillMode=process
	I0114 02:32:32.736136    9007 command_runner.go:130] > [Install]
	I0114 02:32:32.736140    9007 command_runner.go:130] > WantedBy=multi-user.target
	I0114 02:32:32.736850    9007 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 02:32:32.736912    9007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 02:32:32.746888    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 02:32:32.759037    9007 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0114 02:32:32.759049    9007 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0114 02:32:32.759909    9007 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 02:32:32.832760    9007 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 02:32:32.909482    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:32.982395    9007 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 02:32:33.202325    9007 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 02:32:33.270478    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:33.352263    9007 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 02:32:33.362395    9007 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 02:32:33.362486    9007 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 02:32:33.366336    9007 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0114 02:32:33.366346    9007 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 02:32:33.366351    9007 command_runner.go:130] > Device: 100036h/1048630d	Inode: 128         Links: 1
	I0114 02:32:33.366357    9007 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0114 02:32:33.366366    9007 command_runner.go:130] > Access: 2023-01-14 10:32:32.654116379 +0000
	I0114 02:32:33.366371    9007 command_runner.go:130] > Modify: 2023-01-14 10:32:32.650116379 +0000
	I0114 02:32:33.366376    9007 command_runner.go:130] > Change: 2023-01-14 10:32:32.651116379 +0000
	I0114 02:32:33.366380    9007 command_runner.go:130] >  Birth: -
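(start.go waits up to 60s here for /var/run/cri-dockerd.sock to exist before querying crictl. A generic polling sketch of that kind of wait — not minikube's start.go code; the poll interval is an assumption:)

```go
package main

// Generic "wait for a unix socket to appear" sketch, mirroring the 60s wait
// for /var/run/cri-dockerd.sock logged above. Not minikube's implementation.

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a socket, or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
```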
	I0114 02:32:33.366399    9007 start.go:472] Will wait 60s for crictl version
	I0114 02:32:33.366443    9007 ssh_runner.go:195] Run: which crictl
	I0114 02:32:33.370076    9007 command_runner.go:130] > /usr/bin/crictl
	I0114 02:32:33.370235    9007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 02:32:33.397073    9007 command_runner.go:130] > Version:  0.1.0
	I0114 02:32:33.397086    9007 command_runner.go:130] > RuntimeName:  docker
	I0114 02:32:33.397090    9007 command_runner.go:130] > RuntimeVersion:  20.10.21
	I0114 02:32:33.397095    9007 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0114 02:32:33.399228    9007 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 02:32:33.399322    9007 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:32:33.427357    9007 command_runner.go:130] > 20.10.21
	I0114 02:32:33.429509    9007 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:32:33.455706    9007 command_runner.go:130] > 20.10.21
	I0114 02:32:33.481551    9007 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 02:32:33.523192    9007 out.go:177]   - env NO_PROXY=192.168.58.2
	I0114 02:32:33.544610    9007 cli_runner.go:164] Run: docker exec -t multinode-022829-m02 dig +short host.docker.internal
	I0114 02:32:33.654806    9007 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 02:32:33.654910    9007 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 02:32:33.659065    9007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:32:33.668923    9007 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829 for IP: 192.168.58.3
	I0114 02:32:33.669065    9007 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 02:32:33.669137    9007 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 02:32:33.669145    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 02:32:33.669173    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 02:32:33.669193    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 02:32:33.669215    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 02:32:33.669314    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 02:32:33.669374    9007 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 02:32:33.669388    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 02:32:33.669424    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 02:32:33.669465    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 02:32:33.669498    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 02:32:33.669580    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:32:33.669613    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.669636    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.669658    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem -> /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.669978    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 02:32:33.687135    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 02:32:33.704197    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 02:32:33.722104    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 02:32:33.739409    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 02:32:33.756732    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 02:32:33.773943    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 02:32:33.791455    9007 ssh_runner.go:195] Run: openssl version
	I0114 02:32:33.796780    9007 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 02:32:33.797153    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 02:32:33.805505    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.809262    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.809427    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.809480    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.814578    9007 command_runner.go:130] > 3ec20f2e
	I0114 02:32:33.814995    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 02:32:33.822667    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 02:32:33.830729    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.834645    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.834753    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.834810    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.839794    9007 command_runner.go:130] > b5213941
	I0114 02:32:33.840130    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 02:32:33.847595    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 02:32:33.855728    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.859822    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.859847    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.859891    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.865041    9007 command_runner.go:130] > 51391683
	I0114 02:32:33.865444    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
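The certificate steps above follow the standard OpenSSL hashed-symlink layout for /etc/ssl/certs: each CA file is linked under the hash of its subject name so OpenSSL-based clients can find it. A minimal sketch of the same steps run by hand, using the minikube CA path taken from this log (illustrative only, not the exact minikube code path):

    # place the CA under /etc/ssl/certs, then add the <hash>.0 symlink OpenSSL scans for
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"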
	I0114 02:32:33.872970    9007 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 02:32:33.940129    9007 command_runner.go:130] > systemd
	I0114 02:32:33.942178    9007 cni.go:95] Creating CNI manager for ""
	I0114 02:32:33.942192    9007 cni.go:156] 3 nodes found, recommending kindnet
	I0114 02:32:33.942205    9007 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 02:32:33.942215    9007 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-022829 NodeName:multinode-022829-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 02:32:33.942317    9007 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-022829-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 02:32:33.942367    9007 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-022829-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 02:32:33.942439    9007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 02:32:33.949950    9007 command_runner.go:130] > kubeadm
	I0114 02:32:33.949959    9007 command_runner.go:130] > kubectl
	I0114 02:32:33.949963    9007 command_runner.go:130] > kubelet
	I0114 02:32:33.950576    9007 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 02:32:33.950641    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0114 02:32:33.957907    9007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I0114 02:32:33.970617    9007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 02:32:33.983243    9007 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 02:32:33.986967    9007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:32:33.996741    9007 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:32:33.996942    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:32:33.996927    9007 start.go:286] JoinCluster: &{Name:multinode-022829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:32:33.996994    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0114 02:32:33.997057    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:34.057064    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:34.189867    9007 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 
	I0114 02:32:34.189908    9007 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:32:34.189929    9007 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:32:34.190179    9007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-022829-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0114 02:32:34.190237    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:34.248470    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:34.374802    9007 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0114 02:32:34.398490    9007 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-crlwb, kube-system/kube-proxy-7p92j
	I0114 02:32:37.411146    9007 command_runner.go:130] > node/multinode-022829-m02 cordoned
	I0114 02:32:37.411160    9007 command_runner.go:130] > pod "busybox-65db55d5d6-tqh8p" has DeletionTimestamp older than 1 seconds, skipping
	I0114 02:32:37.411179    9007 command_runner.go:130] > node/multinode-022829-m02 drained
	I0114 02:32:37.411199    9007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-022829-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.22099785s)
	I0114 02:32:37.411208    9007 node.go:109] successfully drained node "m02"
	I0114 02:32:37.411547    9007 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:37.411775    9007 kapi.go:59] client config for multinode-022829: &rest.Config{Host:"https://127.0.0.1:51427", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:32:37.412047    9007 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0114 02:32:37.412079    9007 round_trippers.go:463] DELETE https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m02
	I0114 02:32:37.412083    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:37.412090    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:37.412095    9007 round_trippers.go:473]     Content-Type: application/json
	I0114 02:32:37.412100    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:37.415764    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:37.415776    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:37.415786    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:37.415791    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:37.415796    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:37.415801    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:37.415805    9007 round_trippers.go:580]     Content-Length: 171
	I0114 02:32:37.415811    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 02:32:37.415815    9007 round_trippers.go:580]     Audit-Id: 739df83f-6189-4f42-9d26-4657bc9bee4f
	I0114 02:32:37.415829    9007 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-022829-m02","kind":"nodes","uid":"3911d4d5-57fa-4f76-9a4f-ea2b104e8003"}}
	I0114 02:32:37.415857    9007 node.go:125] successfully deleted node "m02"
	I0114 02:32:37.415865    9007 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
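The drain-and-delete sequence above (kubectl drain followed by a DELETE on the Node object) can be reproduced with plain kubectl using the same flags minikube passes; the node name is taken from this log, and --delete-emptydir-data is the current name for the deprecated --delete-local-data flag:

    # evict workloads from the stale worker, ignoring DaemonSet-managed pods
    kubectl drain multinode-022829-m02 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data
    # remove the Node object so a later "kubeadm join" can register the node afresh
    kubectl delete node multinode-022829-m02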
	I0114 02:32:37.415876    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:32:37.415888    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:32:37.487561    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:32:37.596054    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:32:37.596072    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 02:32:37.613636    9007 command_runner.go:130] ! W0114 10:32:37.486754    1070 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:37.613655    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:32:37.613667    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:32:37.613675    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:32:37.613681    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:32:37.613689    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:32:37.613702    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:32:37.613708    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0114 02:32:37.613750    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:32:37.486754    1070 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:37.613763    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:32:37.613771    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:32:37.652178    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:32:37.652195    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:37.655560    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:37.655591    9007 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:32:37.486754    1070 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
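The reset step fails because both the containerd and cri-dockerd sockets are present on the node, so kubeadm refuses to pick one. kubeadm accepts an explicit endpoint on the command line; a minimal sketch using the cri-dockerd socket already passed to the join command above:

    # point kubeadm reset at the cri-dockerd endpoint instead of letting it abort on the ambiguity
    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock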
	I0114 02:32:48.704292    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:32:48.704370    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:32:48.744107    9007 command_runner.go:130] ! W0114 10:32:48.743294    1598 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:48.744887    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:32:48.768330    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:32:48.772640    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:32:48.834974    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:32:48.834989    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:32:48.859544    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:32:48.859559    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:48.862605    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:32:48.862618    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:32:48.862629    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 02:32:48.862658    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:32:48.743294    1598 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:48.862665    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:32:48.862676    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:32:48.901575    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:32:48.901592    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:48.901608    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:48.901622    9007 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:32:48.743294    1598 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.509416    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:33:10.509476    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:33:10.550128    9007 command_runner.go:130] ! W0114 10:33:10.549307    1821 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:33:10.550143    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:33:10.573506    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:33:10.578261    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:33:10.641449    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:33:10.641467    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:33:10.665877    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:33:10.665890    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.669002    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:33:10.669016    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:33:10.669023    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 02:33:10.669049    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:10.549307    1821 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.669064    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:33:10.669072    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:33:10.710078    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:33:10.710095    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.710114    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.710125    9007 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:10.549307    1821 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:36.913164    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:33:36.913286    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:33:36.951844    9007 command_runner.go:130] ! W0114 10:33:36.951096    2072 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:33:36.951859    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:33:36.974820    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:33:36.979461    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:33:37.042647    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:33:37.042670    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:33:37.066765    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:33:37.066780    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:37.069747    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:33:37.069763    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:33:37.069772    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 02:33:37.069817    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:36.951096    2072 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:37.069829    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:33:37.069859    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:33:37.109456    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:33:37.109473    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:37.109493    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:37.109504    9007 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:36.951096    2072 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:08.757659    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:34:08.757763    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:34:08.797250    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:34:08.898628    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:34:08.898642    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 02:34:08.917294    9007 command_runner.go:130] ! W0114 10:34:08.796491    2383 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:34:08.917311    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:34:08.917322    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:34:08.917328    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:34:08.917333    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:34:08.917339    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:34:08.917348    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:34:08.917355    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0114 02:34:08.917382    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:34:08.796491    2383 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:08.917392    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:34:08.917400    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:34:08.959277    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:34:08.959294    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:08.959313    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:08.959324    9007 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:34:08.796491    2383 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:55.771184    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:34:55.771249    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:34:55.810924    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:34:55.910562    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:34:55.910604    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 02:34:55.929717    9007 command_runner.go:130] ! W0114 10:34:55.810162    2790 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:34:55.929733    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:34:55.929744    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:34:55.929750    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:34:55.929755    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:34:55.929763    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:34:55.929773    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:34:55.929780    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0114 02:34:55.929811    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:34:55.810162    2790 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:55.929819    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:34:55.929826    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:34:55.968698    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:34:55.968718    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:55.968736    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:55.968756    9007 start.go:288] JoinCluster complete in 2m21.971491515s
	I0114 02:34:55.990848    9007 out.go:177] 
	W0114 02:34:56.011875    9007 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:34:55.810162    2790 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:34:55.810162    2790 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 02:34:56.011909    9007 out.go:239] * 
	* 
	W0114 02:34:56.013142    9007 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 02:34:56.076892    9007 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:295: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-022829" : exit status 80
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-022829
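The failure above is self-consistent: kubeadm refuses the join because a Ready Node named "multinode-022829-m02" is still registered with the control plane, and the automatic kubeadm reset between retries aborts because both the containerd and cri-dockerd sockets are present on the host. A minimal manual recovery sketch, assuming kubectl is pointed at this cluster and cri-dockerd is the intended runtime; the token and hash are placeholders, not values from this run:

    # remove the stale Node object so a worker with the same name can rejoin
    kubectl delete node multinode-022829-m02

    # on the worker, reset with an explicit CRI endpoint so kubeadm does not have to pick one
    sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock

    # rejoin; <token> and <hash> come from 'kubeadm token create --print-join-command' on the control plane
    sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> --cri-socket unix:///var/run/cri-dockerd.sock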
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-022829
helpers_test.go:235: (dbg) docker inspect multinode-022829:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "47ba0f35d5ab1503f803fd4aceb76652237b3f44fa9b7c9f3ccacdfee7daaf70",
	        "Created": "2023-01-14T10:28:37.471171903Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93806,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:31:58.757941761Z",
	            "FinishedAt": "2023-01-14T10:31:32.976857321Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/47ba0f35d5ab1503f803fd4aceb76652237b3f44fa9b7c9f3ccacdfee7daaf70/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/47ba0f35d5ab1503f803fd4aceb76652237b3f44fa9b7c9f3ccacdfee7daaf70/hostname",
	        "HostsPath": "/var/lib/docker/containers/47ba0f35d5ab1503f803fd4aceb76652237b3f44fa9b7c9f3ccacdfee7daaf70/hosts",
	        "LogPath": "/var/lib/docker/containers/47ba0f35d5ab1503f803fd4aceb76652237b3f44fa9b7c9f3ccacdfee7daaf70/47ba0f35d5ab1503f803fd4aceb76652237b3f44fa9b7c9f3ccacdfee7daaf70-json.log",
	        "Name": "/multinode-022829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-022829:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-022829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/020390d5ad5690f077aac569ed14bf1b8da82f92542602206feb64f47cc02deb-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/020390d5ad5690f077aac569ed14bf1b8da82f92542602206feb64f47cc02deb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/020390d5ad5690f077aac569ed14bf1b8da82f92542602206feb64f47cc02deb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/020390d5ad5690f077aac569ed14bf1b8da82f92542602206feb64f47cc02deb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-022829",
	                "Source": "/var/lib/docker/volumes/multinode-022829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-022829",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-022829",
	                "name.minikube.sigs.k8s.io": "multinode-022829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "acd1371d4261badd965e5e8df99e73a6290d46abc8982025ad26ff28800922ec",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51425"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51426"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "51427"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/acd1371d4261",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-022829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "47ba0f35d5ab",
	                        "multinode-022829"
	                    ],
	                    "NetworkID": "b5a9eaf855221e295b12d4b293a8ed52b74cd627efdf7aae7dfde5d19c32b66a",
	                    "EndpointID": "22cd2ad93973fc17ee277bf74fb0941aa1f55343091d10d151c4c82e1fdded1e",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
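The inspect dump above is captured in full for the post-mortem; when only a single field is of interest, docker's format templates narrow the output. A short sketch using the same templates that appear later in this log's "Last Start" section:

    # container state only
    docker container inspect multinode-022829 --format={{.State.Status}}

    # IPv4/IPv6 address of the container on each of its networks
    docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829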
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-022829 -n multinode-022829
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-022829 logs -n 25: (3.128487159s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-022829 ssh -n                                                                                                     | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-022829 cp multinode-022829-m02:/home/docker/cp-test.txt                                                           | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1501450262/001/cp-test_multinode-022829-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n                                                                                                     | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-022829 cp multinode-022829-m02:/home/docker/cp-test.txt                                                           | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829:/home/docker/cp-test_multinode-022829-m02_multinode-022829.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n                                                                                                     | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n multinode-022829 sudo cat                                                                           | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | /home/docker/cp-test_multinode-022829-m02_multinode-022829.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-022829 cp multinode-022829-m02:/home/docker/cp-test.txt                                                           | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m03:/home/docker/cp-test_multinode-022829-m02_multinode-022829-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n                                                                                                     | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n multinode-022829-m03 sudo cat                                                                       | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | /home/docker/cp-test_multinode-022829-m02_multinode-022829-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-022829 cp testdata/cp-test.txt                                                                                    | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n                                                                                                     | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-022829 cp multinode-022829-m03:/home/docker/cp-test.txt                                                           | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1501450262/001/cp-test_multinode-022829-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n                                                                                                     | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-022829 cp multinode-022829-m03:/home/docker/cp-test.txt                                                           | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829:/home/docker/cp-test_multinode-022829-m03_multinode-022829.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n                                                                                                     | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n multinode-022829 sudo cat                                                                           | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | /home/docker/cp-test_multinode-022829-m03_multinode-022829.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-022829 cp multinode-022829-m03:/home/docker/cp-test.txt                                                           | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m02:/home/docker/cp-test_multinode-022829-m03_multinode-022829-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n                                                                                                     | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | multinode-022829-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-022829 ssh -n multinode-022829-m02 sudo cat                                                                       | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	|         | /home/docker/cp-test_multinode-022829-m03_multinode-022829-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-022829 node stop m03                                                                                              | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:30 PST | 14 Jan 23 02:30 PST |
	| node    | multinode-022829 node start                                                                                                 | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:31 PST | 14 Jan 23 02:31 PST |
	|         | m03 --alsologtostderr                                                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-022829                                                                                                    | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:31 PST |                     |
	| stop    | -p multinode-022829                                                                                                         | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:31 PST | 14 Jan 23 02:31 PST |
	| start   | -p multinode-022829                                                                                                         | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:31 PST |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-022829                                                                                                    | multinode-022829 | jenkins | v1.28.0 | 14 Jan 23 02:34 PST |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 02:31:57
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 02:31:57.463292    9007 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:31:57.463546    9007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:31:57.463553    9007 out.go:309] Setting ErrFile to fd 2...
	I0114 02:31:57.463557    9007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:31:57.463695    9007 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:31:57.464204    9007 out.go:303] Setting JSON to false
	I0114 02:31:57.482895    9007 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1891,"bootTime":1673690426,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 02:31:57.483001    9007 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:31:57.505119    9007 out.go:177] * [multinode-022829] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:31:57.526493    9007 notify.go:220] Checking for updates...
	I0114 02:31:57.547566    9007 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:31:57.569881    9007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:31:57.591669    9007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:31:57.634594    9007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:31:57.677621    9007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 02:31:57.700699    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:31:57.700818    9007 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:31:57.762971    9007 docker.go:138] docker version: linux-20.10.21
	I0114 02:31:57.763119    9007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:31:57.902048    9007 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-14 10:31:57.81235194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:31:57.946152    9007 out.go:177] * Using the docker driver based on existing profile
	I0114 02:31:57.974376    9007 start.go:294] selected driver: docker
	I0114 02:31:57.974404    9007 start.go:838] validating driver "docker" against &{Name:multinode-022829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false
logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:31:57.974570    9007 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:31:57.974817    9007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:31:58.115544    9007 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:47 SystemTime:2023-01-14 10:31:58.026014826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:31:58.117946    9007 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 02:31:58.117974    9007 cni.go:95] Creating CNI manager for ""
	I0114 02:31:58.117982    9007 cni.go:156] 3 nodes found, recommending kindnet
	I0114 02:31:58.118002    9007 start_flags.go:319] config:
	{Name:multinode-022829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false
nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:31:58.159657    9007 out.go:177] * Starting control plane node multinode-022829 in cluster multinode-022829
	I0114 02:31:58.182909    9007 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:31:58.205599    9007 out.go:177] * Pulling base image ...
	I0114 02:31:58.263758    9007 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:31:58.263817    9007 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:31:58.263859    9007 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 02:31:58.263898    9007 cache.go:57] Caching tarball of preloaded images
	I0114 02:31:58.264082    9007 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 02:31:58.264102    9007 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 02:31:58.264865    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:31:58.321989    9007 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 02:31:58.322005    9007 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 02:31:58.322033    9007 cache.go:193] Successfully downloaded all kic artifacts
	I0114 02:31:58.322071    9007 start.go:364] acquiring machines lock for multinode-022829: {Name:mk7213570c70d360de889fa6f810478b8bc1fac4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:31:58.322163    9007 start.go:368] acquired machines lock for "multinode-022829" in 72.701µs
	I0114 02:31:58.322188    9007 start.go:96] Skipping create...Using existing machine configuration
	I0114 02:31:58.322200    9007 fix.go:55] fixHost starting: 
	I0114 02:31:58.322461    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:31:58.379755    9007 fix.go:103] recreateIfNeeded on multinode-022829: state=Stopped err=<nil>
	W0114 02:31:58.379785    9007 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 02:31:58.400551    9007 out.go:177] * Restarting existing docker container for "multinode-022829" ...
	I0114 02:31:58.444490    9007 cli_runner.go:164] Run: docker start multinode-022829
	I0114 02:31:58.764834    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:31:58.821807    9007 kic.go:426] container "multinode-022829" state is running.
	I0114 02:31:58.822437    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829
	I0114 02:31:58.884410    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:31:58.884931    9007 machine.go:88] provisioning docker machine ...
	I0114 02:31:58.884962    9007 ubuntu.go:169] provisioning hostname "multinode-022829"
	I0114 02:31:58.885067    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:58.951248    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:31:58.951449    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:31:58.951462    9007 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-022829 && echo "multinode-022829" | sudo tee /etc/hostname
	I0114 02:31:59.118133    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-022829
	
	I0114 02:31:59.118294    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:59.178534    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:31:59.178700    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:31:59.178720    9007 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022829/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 02:31:59.294812    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:31:59.294842    9007 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 02:31:59.294864    9007 ubuntu.go:177] setting up certificates
	I0114 02:31:59.294872    9007 provision.go:83] configureAuth start
	I0114 02:31:59.294959    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829
	I0114 02:31:59.354024    9007 provision.go:138] copyHostCerts
	I0114 02:31:59.354076    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:31:59.354145    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 02:31:59.354153    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:31:59.354274    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 02:31:59.354450    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:31:59.354490    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 02:31:59.354495    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:31:59.354571    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 02:31:59.354687    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:31:59.354726    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 02:31:59.354731    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:31:59.354797    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 02:31:59.354913    9007 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.multinode-022829 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-022829]
	I0114 02:31:59.528999    9007 provision.go:172] copyRemoteCerts
	I0114 02:31:59.529076    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 02:31:59.529144    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:59.589255    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:31:59.676659    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 02:31:59.676763    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 02:31:59.695893    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 02:31:59.695988    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0114 02:31:59.718141    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 02:31:59.718243    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 02:31:59.736241    9007 provision.go:86] duration metric: configureAuth took 441.354539ms
	I0114 02:31:59.736264    9007 ubuntu.go:193] setting minikube options for container-runtime
	I0114 02:31:59.736526    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:31:59.736664    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:59.796959    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:31:59.797124    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:31:59.797134    9007 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 02:31:59.914493    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 02:31:59.914521    9007 ubuntu.go:71] root file system type: overlay
	I0114 02:31:59.914752    9007 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 02:31:59.914857    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:59.974871    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:31:59.975033    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:31:59.975084    9007 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 02:32:00.100829    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 02:32:00.100945    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.157586    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:00.157737    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51423 <nil> <nil>}
	I0114 02:32:00.157750    9007 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 02:32:00.279682    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:32:00.279698    9007 machine.go:91] provisioned docker machine in 1.394754368s
	I0114 02:32:00.279708    9007 start.go:300] post-start starting for "multinode-022829" (driver="docker")
	I0114 02:32:00.279715    9007 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 02:32:00.279792    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 02:32:00.279858    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.335832    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:00.421877    9007 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 02:32:00.425589    9007 command_runner.go:130] > NAME="Ubuntu"
	I0114 02:32:00.425598    9007 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 02:32:00.425602    9007 command_runner.go:130] > ID=ubuntu
	I0114 02:32:00.425605    9007 command_runner.go:130] > ID_LIKE=debian
	I0114 02:32:00.425610    9007 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 02:32:00.425613    9007 command_runner.go:130] > VERSION_ID="20.04"
	I0114 02:32:00.425618    9007 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 02:32:00.425622    9007 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 02:32:00.425628    9007 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 02:32:00.425638    9007 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 02:32:00.425642    9007 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 02:32:00.425647    9007 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 02:32:00.425693    9007 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 02:32:00.425704    9007 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 02:32:00.425711    9007 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 02:32:00.425718    9007 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 02:32:00.425725    9007 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 02:32:00.425824    9007 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 02:32:00.426005    9007 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 02:32:00.426014    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /etc/ssl/certs/27282.pem
	I0114 02:32:00.426224    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 02:32:00.433578    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:32:00.450457    9007 start.go:303] post-start completed in 170.738296ms
	I0114 02:32:00.450544    9007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:32:00.450611    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.507359    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:00.591303    9007 command_runner.go:130] > 7%!
	(MISSING)I0114 02:32:00.591397    9007 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 02:32:00.595621    9007 command_runner.go:130] > 91G
	I0114 02:32:00.595903    9007 fix.go:57] fixHost completed within 2.273699494s
	I0114 02:32:00.595914    9007 start.go:83] releasing machines lock for "multinode-022829", held for 2.273737486s
	I0114 02:32:00.596018    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829
	I0114 02:32:00.653920    9007 ssh_runner.go:195] Run: cat /version.json
	I0114 02:32:00.653928    9007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 02:32:00.654004    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.654004    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:00.713447    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:00.713589    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:00.796325    9007 command_runner.go:130] > {"iso_version": "v1.28.0-1668700269-15235", "kicbase_version": "v0.0.36-1668787669-15272", "minikube_version": "v1.28.0", "commit": "c883d3041e11322fb5c977f082b70bf31015848d"}
	I0114 02:32:00.851104    9007 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 02:32:00.853263    9007 ssh_runner.go:195] Run: systemctl --version
	I0114 02:32:00.858017    9007 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I0114 02:32:00.858046    9007 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0114 02:32:00.858275    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0114 02:32:00.865671    9007 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0114 02:32:00.878291    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:00.944303    9007 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0114 02:32:01.028725    9007 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 02:32:01.038014    9007 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0114 02:32:01.038136    9007 command_runner.go:130] > [Unit]
	I0114 02:32:01.038145    9007 command_runner.go:130] > Description=Docker Application Container Engine
	I0114 02:32:01.038150    9007 command_runner.go:130] > Documentation=https://docs.docker.com
	I0114 02:32:01.038185    9007 command_runner.go:130] > BindsTo=containerd.service
	I0114 02:32:01.038193    9007 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0114 02:32:01.038198    9007 command_runner.go:130] > Wants=network-online.target
	I0114 02:32:01.038204    9007 command_runner.go:130] > Requires=docker.socket
	I0114 02:32:01.038210    9007 command_runner.go:130] > StartLimitBurst=3
	I0114 02:32:01.038221    9007 command_runner.go:130] > StartLimitIntervalSec=60
	I0114 02:32:01.038232    9007 command_runner.go:130] > [Service]
	I0114 02:32:01.038240    9007 command_runner.go:130] > Type=notify
	I0114 02:32:01.038246    9007 command_runner.go:130] > Restart=on-failure
	I0114 02:32:01.038254    9007 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0114 02:32:01.038268    9007 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0114 02:32:01.038278    9007 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0114 02:32:01.038283    9007 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0114 02:32:01.038290    9007 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0114 02:32:01.038295    9007 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0114 02:32:01.038302    9007 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0114 02:32:01.038314    9007 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0114 02:32:01.038320    9007 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0114 02:32:01.038323    9007 command_runner.go:130] > ExecStart=
	I0114 02:32:01.038335    9007 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0114 02:32:01.038340    9007 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0114 02:32:01.038347    9007 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0114 02:32:01.038353    9007 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0114 02:32:01.038356    9007 command_runner.go:130] > LimitNOFILE=infinity
	I0114 02:32:01.038360    9007 command_runner.go:130] > LimitNPROC=infinity
	I0114 02:32:01.038366    9007 command_runner.go:130] > LimitCORE=infinity
	I0114 02:32:01.038372    9007 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0114 02:32:01.038378    9007 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0114 02:32:01.038382    9007 command_runner.go:130] > TasksMax=infinity
	I0114 02:32:01.038385    9007 command_runner.go:130] > TimeoutStartSec=0
	I0114 02:32:01.038390    9007 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0114 02:32:01.038394    9007 command_runner.go:130] > Delegate=yes
	I0114 02:32:01.038410    9007 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0114 02:32:01.038416    9007 command_runner.go:130] > KillMode=process
	I0114 02:32:01.038429    9007 command_runner.go:130] > [Install]
	I0114 02:32:01.038434    9007 command_runner.go:130] > WantedBy=multi-user.target
	I0114 02:32:01.038916    9007 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 02:32:01.038985    9007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 02:32:01.048432    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 02:32:01.060759    9007 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0114 02:32:01.060770    9007 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0114 02:32:01.061514    9007 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 02:32:01.126219    9007 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 02:32:01.195178    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:01.263390    9007 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 02:32:01.507993    9007 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 02:32:01.576870    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:01.646399    9007 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 02:32:01.656079    9007 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 02:32:01.656162    9007 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 02:32:01.659969    9007 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0114 02:32:01.659981    9007 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 02:32:01.659987    9007 command_runner.go:130] > Device: 96h/150d	Inode: 117         Links: 1
	I0114 02:32:01.659992    9007 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0114 02:32:01.660002    9007 command_runner.go:130] > Access: 2023-01-14 10:32:00.951712281 +0000
	I0114 02:32:01.660009    9007 command_runner.go:130] > Modify: 2023-01-14 10:32:00.951712281 +0000
	I0114 02:32:01.660017    9007 command_runner.go:130] > Change: 2023-01-14 10:32:00.952712281 +0000
	I0114 02:32:01.660023    9007 command_runner.go:130] >  Birth: -
	I0114 02:32:01.660059    9007 start.go:472] Will wait 60s for crictl version
	I0114 02:32:01.660107    9007 ssh_runner.go:195] Run: which crictl
	I0114 02:32:01.663599    9007 command_runner.go:130] > /usr/bin/crictl
	I0114 02:32:01.663674    9007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 02:32:01.692039    9007 command_runner.go:130] > Version:  0.1.0
	I0114 02:32:01.692054    9007 command_runner.go:130] > RuntimeName:  docker
	I0114 02:32:01.692059    9007 command_runner.go:130] > RuntimeVersion:  20.10.21
	I0114 02:32:01.692063    9007 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0114 02:32:01.694096    9007 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 02:32:01.694188    9007 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:32:01.721427    9007 command_runner.go:130] > 20.10.21
	I0114 02:32:01.723692    9007 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:32:01.749375    9007 command_runner.go:130] > 20.10.21
	I0114 02:32:01.797214    9007 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 02:32:01.797476    9007 cli_runner.go:164] Run: docker exec -t multinode-022829 dig +short host.docker.internal
	I0114 02:32:01.912460    9007 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 02:32:01.912592    9007 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 02:32:01.916872    9007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:32:01.926568    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:01.983057    9007 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:32:01.983146    9007 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:32:02.004708    9007 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I0114 02:32:02.004722    9007 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I0114 02:32:02.004728    9007 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I0114 02:32:02.004734    9007 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I0114 02:32:02.004738    9007 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0114 02:32:02.004742    9007 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0114 02:32:02.004747    9007 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0114 02:32:02.004759    9007 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0114 02:32:02.004763    9007 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0114 02:32:02.004768    9007 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 02:32:02.004772    9007 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0114 02:32:02.006811    9007 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0114 02:32:02.006828    9007 docker.go:543] Images already preloaded, skipping extraction
	I0114 02:32:02.006917    9007 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:32:02.028462    9007 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I0114 02:32:02.028480    9007 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I0114 02:32:02.028484    9007 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I0114 02:32:02.028490    9007 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I0114 02:32:02.028494    9007 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I0114 02:32:02.028499    9007 command_runner.go:130] > registry.k8s.io/pause:3.8
	I0114 02:32:02.028505    9007 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I0114 02:32:02.028510    9007 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0114 02:32:02.028516    9007 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0114 02:32:02.028520    9007 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 02:32:02.028524    9007 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0114 02:32:02.030492    9007 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0114 02:32:02.030510    9007 cache_images.go:84] Images are preloaded, skipping loading
	I0114 02:32:02.030604    9007 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 02:32:02.099895    9007 command_runner.go:130] > systemd
	I0114 02:32:02.102214    9007 cni.go:95] Creating CNI manager for ""
	I0114 02:32:02.102227    9007 cni.go:156] 3 nodes found, recommending kindnet
	I0114 02:32:02.102247    9007 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 02:32:02.102270    9007 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-022829 NodeName:multinode-022829 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 02:32:02.102392    9007 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-022829"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 02:32:02.102473    9007 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-022829 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 02:32:02.102544    9007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 02:32:02.110081    9007 command_runner.go:130] > kubeadm
	I0114 02:32:02.110094    9007 command_runner.go:130] > kubectl
	I0114 02:32:02.110098    9007 command_runner.go:130] > kubelet
	I0114 02:32:02.110759    9007 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 02:32:02.110819    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 02:32:02.118050    9007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I0114 02:32:02.130652    9007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 02:32:02.143152    9007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
	I0114 02:32:02.156163    9007 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 02:32:02.160145    9007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:32:02.169777    9007 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829 for IP: 192.168.58.2
	I0114 02:32:02.169903    9007 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 02:32:02.169967    9007 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 02:32:02.170060    9007 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key
	I0114 02:32:02.170133    9007 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.key.cee25041
	I0114 02:32:02.170197    9007 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.key
	I0114 02:32:02.170206    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0114 02:32:02.170240    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0114 02:32:02.170286    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0114 02:32:02.170313    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0114 02:32:02.170335    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 02:32:02.170358    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 02:32:02.170379    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 02:32:02.170401    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 02:32:02.170505    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 02:32:02.170552    9007 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 02:32:02.170568    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 02:32:02.170606    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 02:32:02.170646    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 02:32:02.170683    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 02:32:02.170760    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:32:02.170795    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem -> /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.170819    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.170840    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.171314    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 02:32:02.188549    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0114 02:32:02.205624    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 02:32:02.223088    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 02:32:02.240397    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 02:32:02.257233    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 02:32:02.274184    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 02:32:02.291418    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 02:32:02.308716    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 02:32:02.325976    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 02:32:02.343388    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 02:32:02.360749    9007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 02:32:02.373779    9007 ssh_runner.go:195] Run: openssl version
	I0114 02:32:02.379031    9007 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 02:32:02.379294    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 02:32:02.387492    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.391289    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.391348    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.391406    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:02.396338    9007 command_runner.go:130] > b5213941
	I0114 02:32:02.396658    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 02:32:02.404197    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 02:32:02.412245    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.416188    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.416313    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.416362    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 02:32:02.422101    9007 command_runner.go:130] > 51391683
	I0114 02:32:02.422158    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 02:32:02.430041    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 02:32:02.438087    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.442012    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.442132    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.442182    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 02:32:02.447342    9007 command_runner.go:130] > 3ec20f2e
	I0114 02:32:02.447713    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 02:32:02.455050    9007 kubeadm.go:396] StartCluster: {Name:multinode-022829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:32:02.455190    9007 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 02:32:02.477968    9007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 02:32:02.492839    9007 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0114 02:32:02.492849    9007 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0114 02:32:02.492853    9007 command_runner.go:130] > /var/lib/minikube/etcd:
	I0114 02:32:02.492856    9007 command_runner.go:130] > member
	I0114 02:32:02.493474    9007 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 02:32:02.493491    9007 kubeadm.go:627] restartCluster start
	I0114 02:32:02.493551    9007 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 02:32:02.500525    9007 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:02.500605    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:02.558787    9007 kubeconfig.go:135] verify returned: extract IP: "multinode-022829" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:02.558877    9007 kubeconfig.go:146] "multinode-022829" context is missing from /Users/jenkins/minikube-integration/15642-1559/kubeconfig - will repair!
	I0114 02:32:02.559115    9007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/kubeconfig: {Name:mkb6d1db5780815291441dc67b348461b9325651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:32:02.559621    9007 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:02.559849    9007 kapi.go:59] client config for multinode-022829: &rest.Config{Host:"https://127.0.0.1:51427", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:32:02.560211    9007 cert_rotation.go:137] Starting client certificate rotation controller
	I0114 02:32:02.560439    9007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 02:32:02.568349    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:02.568421    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:02.576901    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:02.778966    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:02.779112    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:02.790114    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:02.979005    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:02.979221    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:02.990221    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.178583    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.178707    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.189738    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.379068    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.379257    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.390013    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.579061    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.579258    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.590144    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.779045    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.779211    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.790143    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:03.979018    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:03.979177    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:03.990176    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.177580    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.177785    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.188986    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.378536    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.378692    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.389475    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.578999    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.579157    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.590054    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.778876    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.779061    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.790083    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:04.977094    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:04.977269    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:04.987455    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.178484    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:05.178636    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:05.189676    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.378821    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:05.378945    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:05.390083    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.579054    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:05.579232    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:05.590456    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.590467    9007 api_server.go:165] Checking apiserver status ...
	I0114 02:32:05.590523    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 02:32:05.598809    9007 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.598822    9007 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
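The block of identical warnings above is minikube polling (roughly every 200ms for about 3s) for a kube-apiserver process before giving up and falling through to a reconfigure. The probe itself is a single pgrep, which can be repeated by hand as a sketch, assuming the same profile name:

    # non-zero exit while no apiserver process matches, which is what produces the
    # "needs reconfigure: apiserver error: timed out waiting for the condition" line above
    minikube ssh -p multinode-022829 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'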
	I0114 02:32:05.598832    9007 kubeadm.go:1114] stopping kube-system containers ...
	I0114 02:32:05.598910    9007 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 02:32:05.620895    9007 command_runner.go:130] > 22dfc551af5e
	I0114 02:32:05.620910    9007 command_runner.go:130] > 2b17c5d2929a
	I0114 02:32:05.620915    9007 command_runner.go:130] > 4b8fe186dcad
	I0114 02:32:05.620919    9007 command_runner.go:130] > 2d0bd2f67f63
	I0114 02:32:05.620923    9007 command_runner.go:130] > 85252b069649
	I0114 02:32:05.620929    9007 command_runner.go:130] > ed7a47472cbc
	I0114 02:32:05.620934    9007 command_runner.go:130] > ed5ada705cee
	I0114 02:32:05.620938    9007 command_runner.go:130] > a7ee261cbfc6
	I0114 02:32:05.620942    9007 command_runner.go:130] > d3ae0d142c8f
	I0114 02:32:05.620946    9007 command_runner.go:130] > 9048785f4e90
	I0114 02:32:05.620950    9007 command_runner.go:130] > 516991d5f2e5
	I0114 02:32:05.620953    9007 command_runner.go:130] > 22eb2357fc11
	I0114 02:32:05.620957    9007 command_runner.go:130] > 32c139aa3617
	I0114 02:32:05.620962    9007 command_runner.go:130] > 88473b6a518e
	I0114 02:32:05.620966    9007 command_runner.go:130] > 037848e173d9
	I0114 02:32:05.620969    9007 command_runner.go:130] > 2da5274a0541
	I0114 02:32:05.623085    9007 docker.go:444] Stopping containers: [22dfc551af5e 2b17c5d2929a 4b8fe186dcad 2d0bd2f67f63 85252b069649 ed7a47472cbc ed5ada705cee a7ee261cbfc6 d3ae0d142c8f 9048785f4e90 516991d5f2e5 22eb2357fc11 32c139aa3617 88473b6a518e 037848e173d9 2da5274a0541]
	I0114 02:32:05.623181    9007 ssh_runner.go:195] Run: docker stop 22dfc551af5e 2b17c5d2929a 4b8fe186dcad 2d0bd2f67f63 85252b069649 ed7a47472cbc ed5ada705cee a7ee261cbfc6 d3ae0d142c8f 9048785f4e90 516991d5f2e5 22eb2357fc11 32c139aa3617 88473b6a518e 037848e173d9 2da5274a0541
	I0114 02:32:05.643649    9007 command_runner.go:130] > 22dfc551af5e
	I0114 02:32:05.643665    9007 command_runner.go:130] > 2b17c5d2929a
	I0114 02:32:05.643833    9007 command_runner.go:130] > 4b8fe186dcad
	I0114 02:32:05.643877    9007 command_runner.go:130] > 2d0bd2f67f63
	I0114 02:32:05.645160    9007 command_runner.go:130] > 85252b069649
	I0114 02:32:05.645170    9007 command_runner.go:130] > ed7a47472cbc
	I0114 02:32:05.645175    9007 command_runner.go:130] > ed5ada705cee
	I0114 02:32:05.645181    9007 command_runner.go:130] > a7ee261cbfc6
	I0114 02:32:05.645416    9007 command_runner.go:130] > d3ae0d142c8f
	I0114 02:32:05.645433    9007 command_runner.go:130] > 9048785f4e90
	I0114 02:32:05.645438    9007 command_runner.go:130] > 516991d5f2e5
	I0114 02:32:05.645441    9007 command_runner.go:130] > 22eb2357fc11
	I0114 02:32:05.645445    9007 command_runner.go:130] > 32c139aa3617
	I0114 02:32:05.645455    9007 command_runner.go:130] > 88473b6a518e
	I0114 02:32:05.645460    9007 command_runner.go:130] > 037848e173d9
	I0114 02:32:05.645466    9007 command_runner.go:130] > 2da5274a0541
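The list-then-stop pair of docker commands above can be collapsed into one line when reproducing this step manually; a sketch using the same name filter minikube passes to docker ps:

    # stop every kube-system pod container in one shot (IDs are the same ones echoed above)
    docker stop $(docker ps -aq --filter 'name=k8s_.*_(kube-system)_')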
	I0114 02:32:05.647912    9007 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 02:32:05.658263    9007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 02:32:05.665717    9007 command_runner.go:130] > -rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	I0114 02:32:05.665729    9007 command_runner.go:130] > -rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	I0114 02:32:05.665735    9007 command_runner.go:130] > -rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	I0114 02:32:05.665746    9007 command_runner.go:130] > -rw------- 1 root root 5600 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	I0114 02:32:05.666477    9007 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 14 10:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 14 10:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Jan 14 10:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 14 10:28 /etc/kubernetes/scheduler.conf
	
	I0114 02:32:05.666551    9007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 02:32:05.673232    9007 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 02:32:05.673937    9007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 02:32:05.680705    9007 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0114 02:32:05.681415    9007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 02:32:05.688663    9007 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.688721    9007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 02:32:05.695729    9007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 02:32:05.703047    9007 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:32:05.703108    9007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
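controller-manager.conf and scheduler.conf are deleted above because grep finds no https://control-plane.minikube.internal:8443 server entry in them, while admin.conf and kubelet.conf are kept. A sketch of the same keep-or-remove pass over all four files, using the paths from the log:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf   # stale server address, regenerated by kubeadm below
    done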
	I0114 02:32:05.710086    9007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 02:32:05.717668    9007 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 02:32:05.717679    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:05.760869    9007 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 02:32:05.760884    9007 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0114 02:32:05.761142    9007 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0114 02:32:05.761326    9007 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 02:32:05.761651    9007 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0114 02:32:05.761821    9007 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0114 02:32:05.762185    9007 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0114 02:32:05.762198    9007 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0114 02:32:05.762578    9007 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0114 02:32:05.762935    9007 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 02:32:05.763052    9007 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 02:32:05.763059    9007 command_runner.go:130] > [certs] Using the existing "sa" key
	I0114 02:32:05.766233    9007 command_runner.go:130] ! W0114 10:32:05.756257    1173 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:05.766254    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:05.809504    9007 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 02:32:06.068011    9007 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0114 02:32:06.200991    9007 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0114 02:32:06.467532    9007 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 02:32:06.554303    9007 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 02:32:06.557987    9007 command_runner.go:130] ! W0114 10:32:05.804795    1183 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:06.558009    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:06.611817    9007 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 02:32:06.612446    9007 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 02:32:06.612456    9007 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0114 02:32:06.687580    9007 command_runner.go:130] ! W0114 10:32:06.597122    1205 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:06.687601    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:06.730619    9007 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 02:32:06.730634    9007 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 02:32:06.732486    9007 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 02:32:06.733461    9007 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 02:32:06.738948    9007 command_runner.go:130] ! W0114 10:32:06.725493    1239 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:06.738970    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:06.831207    9007 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 02:32:06.835831    9007 command_runner.go:130] ! W0114 10:32:06.826277    1255 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
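Rather than a full kubeadm init, the restart replays individual init phases against the existing state; the certs phase reuses everything already on disk ("Using existing ..."), and only the missing kubeconfig files and the static pod manifests are rewritten. The sequence above, condensed into one loop with the same binary path and config file:

    # phases replayed during the cluster restart, in the order they appear in this log
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done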
	I0114 02:32:06.835863    9007 api_server.go:51] waiting for apiserver process to appear ...
	I0114 02:32:06.835933    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:32:07.347441    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:32:07.845931    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:32:07.857137    9007 command_runner.go:130] > 1733
	I0114 02:32:07.857172    9007 api_server.go:71] duration metric: took 1.021311066s to wait for apiserver process to appear ...
	I0114 02:32:07.857182    9007 api_server.go:87] waiting for apiserver healthz status ...
	I0114 02:32:07.857195    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:07.858851    9007 api_server.go:268] stopped: https://127.0.0.1:51427/healthz: Get "https://127.0.0.1:51427/healthz": EOF
	I0114 02:32:08.358954    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:10.934069    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 02:32:10.934083    9007 api_server.go:102] status: https://127.0.0.1:51427/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 02:32:11.360467    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:11.368724    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 02:32:11.368738    9007 api_server.go:102] status: https://127.0.0.1:51427/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:32:11.860518    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:11.867742    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 02:32:11.867757    9007 api_server.go:102] status: https://127.0.0.1:51427/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:32:12.360376    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:12.368106    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 200:
	ok
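The healthz progression above (EOF, then 403 for the anonymous user, then 500 while the rbac and priority-class post-start hooks finish, then 200) is the apiserver coming back up. A sketch of the same probe with the profile's client certificate, using the cert paths from the rest.Config dump earlier in this log (this job keeps .minikube under /Users/jenkins/minikube-integration/15642-1559):

    MK=/Users/jenkins/minikube-integration/15642-1559/.minikube
    curl -s --cacert $MK/ca.crt \
         --cert $MK/profiles/multinode-022829/client.crt \
         --key  $MK/profiles/multinode-022829/client.key \
         'https://127.0.0.1:51427/healthz?verbose'   # ?verbose prints the per-check lines seen in the 500 bodies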
	I0114 02:32:12.368162    9007 round_trippers.go:463] GET https://127.0.0.1:51427/version
	I0114 02:32:12.368168    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:12.368177    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:12.368183    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:12.374153    9007 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0114 02:32:12.374163    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:12.374169    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:12.374173    9007 round_trippers.go:580]     Content-Length: 263
	I0114 02:32:12.374179    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:12 GMT
	I0114 02:32:12.374183    9007 round_trippers.go:580]     Audit-Id: 5896713a-e9ab-4b3d-b23b-4895d0f821f3
	I0114 02:32:12.374189    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:12.374194    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:12.374199    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:12.374220    9007 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0114 02:32:12.374268    9007 api_server.go:140] control plane version: v1.25.3
	I0114 02:32:12.374276    9007 api_server.go:130] duration metric: took 4.517078938s to wait for apiserver health ...
	I0114 02:32:12.374282    9007 cni.go:95] Creating CNI manager for ""
	I0114 02:32:12.374287    9007 cni.go:156] 3 nodes found, recommending kindnet
	I0114 02:32:12.412445    9007 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0114 02:32:12.433914    9007 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0114 02:32:12.440421    9007 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0114 02:32:12.440434    9007 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0114 02:32:12.440440    9007 command_runner.go:130] > Device: 8eh/142d	Inode: 1184766     Links: 1
	I0114 02:32:12.440445    9007 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0114 02:32:12.440450    9007 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0114 02:32:12.440454    9007 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0114 02:32:12.440458    9007 command_runner.go:130] > Change: 2023-01-14 10:06:30.247481244 +0000
	I0114 02:32:12.440466    9007 command_runner.go:130] >  Birth: -
	I0114 02:32:12.440512    9007 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 02:32:12.440518    9007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0114 02:32:12.454524    9007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0114 02:32:13.447772    9007 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0114 02:32:13.449556    9007 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0114 02:32:13.452148    9007 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0114 02:32:13.527275    9007 command_runner.go:130] > daemonset.apps/kindnet configured
	I0114 02:32:13.537707    9007 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.083156211s)
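With three nodes found, kindnet is selected as the CNI and the apply above is idempotent (everything "unchanged" except the daemonset, which is "configured"). A quick sketch for confirming the daemonset actually rolled out on all nodes, assuming kubectl is pointed at the multinode-022829 context:

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
    kubectl -n kube-system get pods -o wide | grep kindnet   # expect one pod per node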
	I0114 02:32:13.537742    9007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 02:32:13.537844    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:13.537854    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:13.537864    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:13.537873    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:13.542700    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:13.542720    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:13.542728    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:13.542736    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:13.542748    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:13 GMT
	I0114 02:32:13.542770    9007 round_trippers.go:580]     Audit-Id: 46d51b2d-b1cd-41d4-9031-c062220a458a
	I0114 02:32:13.542802    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:13.542813    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:13.544298    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"719"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"415","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84181 chars]
	I0114 02:32:13.547374    9007 system_pods.go:59] 12 kube-system pods found
	I0114 02:32:13.547393    9007 system_pods.go:61] "coredns-565d847f94-xg88j" [8ba9cbef-253e-46ad-aa78-55875dc5939b] Running
	I0114 02:32:13.547399    9007 system_pods.go:61] "etcd-multinode-022829" [f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 02:32:13.547403    9007 system_pods.go:61] "kindnet-2ffw5" [6e2e34df-4259-4f9d-a1d8-b7c33a252211] Running
	I0114 02:32:13.547406    9007 system_pods.go:61] "kindnet-crlwb" [129cddf5-10fc-4467-ab9d-d9a47d195213] Running
	I0114 02:32:13.547410    9007 system_pods.go:61] "kindnet-pqh2t" [cb280495-e617-461c-a259-e28b47f301d6] Running
	I0114 02:32:13.547414    9007 system_pods.go:61] "kube-apiserver-multinode-022829" [b153813e-4767-4643-9cc4-ab5c1f8a2441] Running
	I0114 02:32:13.547420    9007 system_pods.go:61] "kube-controller-manager-multinode-022829" [3ecd3fea-11b6-4dd0-9ac1-200f293b0e22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 02:32:13.547424    9007 system_pods.go:61] "kube-proxy-6bgqj" [330a14fa-1ce0-4857-81a1-2988087382d4] Running
	I0114 02:32:13.547428    9007 system_pods.go:61] "kube-proxy-7p92j" [abe462b8-5607-4e29-b040-12678d7ec756] Running
	I0114 02:32:13.547432    9007 system_pods.go:61] "kube-proxy-pplrc" [f6acf6b8-0d1e-4694-85de-f70fb0bcfee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0114 02:32:13.547437    9007 system_pods.go:61] "kube-scheduler-multinode-022829" [dec76631-6f7c-433f-87e4-2d0c847b6f29] Running
	I0114 02:32:13.547441    9007 system_pods.go:61] "storage-provisioner" [29960f5b-1391-43dd-9ebb-93c76a894fa2] Running
	I0114 02:32:13.547445    9007 system_pods.go:74] duration metric: took 9.69295ms to wait for pod list to return data ...
	I0114 02:32:13.547452    9007 node_conditions.go:102] verifying NodePressure condition ...
	I0114 02:32:13.547492    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes
	I0114 02:32:13.547497    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:13.547503    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:13.547509    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:13.550581    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:13.550595    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:13.550601    9007 round_trippers.go:580]     Audit-Id: 9021e032-5f34-4678-9320-d3b8bfded768
	I0114 02:32:13.550606    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:13.550611    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:13.550616    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:13.550621    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:13.550625    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:13 GMT
	I0114 02:32:13.550802    9007 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"719"},"items":[{"metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16143 chars]
	I0114 02:32:13.551427    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:13.551441    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:13.551450    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:13.551453    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:13.551458    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:13.551461    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:13.551467    9007 node_conditions.go:105] duration metric: took 4.012324ms to run NodePressure ...
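The NodePressure verification above only reads cpu and ephemeral-storage capacity from each of the three node objects. An equivalent one-liner against the same endpoint, as a sketch assuming the same kubectl context:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'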
	I0114 02:32:13.551480    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:32:13.844983    9007 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0114 02:32:13.948973    9007 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0114 02:32:13.952425    9007 command_runner.go:130] ! W0114 10:32:13.658132    2437 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:13.952459    9007 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 02:32:13.952523    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0114 02:32:13.952532    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:13.952541    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:13.952549    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:13.956283    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:13.956302    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:13.956310    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:13.956316    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:13 GMT
	I0114 02:32:13.956322    9007 round_trippers.go:580]     Audit-Id: a7d51ca1-f62e-48ff-b775-d61ae92021ad
	I0114 02:32:13.956329    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:13.956334    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:13.956339    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:13.956568    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"724"},"items":[{"metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30919 chars]
	I0114 02:32:13.957509    9007 kubeadm.go:778] kubelet initialised
	I0114 02:32:13.957522    9007 kubeadm.go:779] duration metric: took 5.053909ms waiting for restarted kubelet to initialise ...
	I0114 02:32:13.957529    9007 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
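The per-pod polling that follows (etcd, kube-controller-manager, kube-proxy and the rest, up to 4m each) maps onto a plain kubectl wait; a sketch targeting the three pods reported ContainersNotReady in the pod list above:

    kubectl -n kube-system wait --for=condition=Ready --timeout=4m \
      pod/etcd-multinode-022829 pod/kube-controller-manager-multinode-022829 pod/kube-proxy-pplrc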
	I0114 02:32:13.957574    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:13.957580    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:13.957587    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:13.957595    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.019403    9007 round_trippers.go:574] Response Status: 200 OK in 61 milliseconds
	I0114 02:32:14.019437    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.019452    9007 round_trippers.go:580]     Audit-Id: 2b678049-9fa9-4318-a6e1-2bf3b099cbb1
	I0114 02:32:14.019467    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.019476    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.019481    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.019486    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.019491    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.021078    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"724"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"415","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84632 chars]
	I0114 02:32:14.023177    9007 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-xg88j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:14.023233    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/coredns-565d847f94-xg88j
	I0114 02:32:14.023244    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.023252    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.023257    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.026394    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:14.026409    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.026419    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.026425    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.026431    9007 round_trippers.go:580]     Audit-Id: 4204a8d1-cc18-4161-b8ff-75ad78931a9a
	I0114 02:32:14.026436    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.026441    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.026446    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.026506    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"415","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6343 chars]
	I0114 02:32:14.026782    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:14.026788    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.026795    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.026800    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.029356    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:14.029368    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.029375    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.029380    9007 round_trippers.go:580]     Audit-Id: a54c14b9-c397-4da1-9756-cd33fcc66791
	I0114 02:32:14.029385    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.029390    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.029395    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.029400    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.029457    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:14.029649    9007 pod_ready.go:92] pod "coredns-565d847f94-xg88j" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:14.029655    9007 pod_ready.go:81] duration metric: took 6.465458ms waiting for pod "coredns-565d847f94-xg88j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:14.029662    9007 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:14.029690    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:14.029695    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.029701    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.029708    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.032046    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:14.032056    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.032062    9007 round_trippers.go:580]     Audit-Id: 0d5ef7ae-aa62-48de-b92a-01e09f0eb750
	I0114 02:32:14.032067    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.032072    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.032077    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.032082    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.032087    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.032258    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:14.032493    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:14.032500    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.032506    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.032512    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.034705    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:14.034714    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.034720    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.034726    9007 round_trippers.go:580]     Audit-Id: f9e49e86-cb93-4d33-92cd-0ad5c27a5bb4
	I0114 02:32:14.034734    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.034739    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.034744    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.034748    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.034808    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:14.535351    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:14.535364    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.535371    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.535376    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.538562    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:14.538576    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.538592    9007 round_trippers.go:580]     Audit-Id: d0de57d6-f113-47aa-b915-e91c98eaca6f
	I0114 02:32:14.538598    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.538605    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.538611    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.538615    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.538621    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.538928    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:14.539204    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:14.539213    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:14.539220    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:14.539225    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:14.541382    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:14.541393    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:14.541398    9007 round_trippers.go:580]     Audit-Id: ff495af3-2e24-419f-8058-93577f6e8b23
	I0114 02:32:14.541403    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:14.541409    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:14.541414    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:14.541420    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:14.541424    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:14 GMT
	I0114 02:32:14.541485    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:15.035495    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:15.035515    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:15.035525    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:15.035533    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:15.038632    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:15.038652    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:15.038660    9007 round_trippers.go:580]     Audit-Id: f2c06345-381c-4af2-84cd-11f6f9c43967
	I0114 02:32:15.038665    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:15.038670    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:15.038674    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:15.038678    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:15.038683    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:15 GMT
	I0114 02:32:15.038742    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:15.039027    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:15.039034    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:15.039039    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:15.039045    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:15.041117    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:15.041129    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:15.041134    9007 round_trippers.go:580]     Audit-Id: 920c4cd5-5b2b-4e06-a7dc-cc7ef0d28a35
	I0114 02:32:15.041140    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:15.041145    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:15.041149    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:15.041155    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:15.041159    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:15 GMT
	I0114 02:32:15.041260    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:15.535888    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:15.535907    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:15.535920    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:15.535930    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:15.539553    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:15.539567    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:15.539575    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:15 GMT
	I0114 02:32:15.539582    9007 round_trippers.go:580]     Audit-Id: cf0865a0-4e3d-46f4-b169-bf203962820c
	I0114 02:32:15.539603    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:15.539608    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:15.539615    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:15.539626    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:15.539736    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:15.539996    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:15.540002    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:15.540008    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:15.540013    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:15.542235    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:15.542244    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:15.542250    9007 round_trippers.go:580]     Audit-Id: 53ee419d-354a-498b-bfcb-1365fca3aecf
	I0114 02:32:15.542254    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:15.542260    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:15.542265    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:15.542270    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:15.542275    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:15 GMT
	I0114 02:32:15.542335    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:16.037129    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:16.037150    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:16.037163    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:16.037173    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:16.041251    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:16.041263    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:16.041302    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:16.041315    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:16 GMT
	I0114 02:32:16.041324    9007 round_trippers.go:580]     Audit-Id: 590bc0a0-3327-4208-900e-b0fdfc7c0822
	I0114 02:32:16.041330    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:16.041336    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:16.041341    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:16.041407    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:16.041674    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:16.041682    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:16.041688    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:16.041693    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:16.044367    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:16.044381    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:16.044387    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:16.044393    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:16 GMT
	I0114 02:32:16.044401    9007 round_trippers.go:580]     Audit-Id: 52a54fe8-4eca-4388-8e92-adc98141a52b
	I0114 02:32:16.044406    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:16.044416    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:16.044421    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:16.044497    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:16.044714    9007 pod_ready.go:102] pod "etcd-multinode-022829" in "kube-system" namespace has status "Ready":"False"
	I0114 02:32:16.537177    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:16.537210    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:16.537254    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:16.537307    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:16.541370    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:16.541388    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:16.541397    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:16.541405    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:16.541413    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:16.541417    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:16 GMT
	I0114 02:32:16.541422    9007 round_trippers.go:580]     Audit-Id: 7fb1e0d9-9113-4529-b3e5-0f0e31915db9
	I0114 02:32:16.541429    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:16.541753    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:16.542009    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:16.542017    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:16.542023    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:16.542029    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:16.544157    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:16.544167    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:16.544173    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:16.544179    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:16 GMT
	I0114 02:32:16.544187    9007 round_trippers.go:580]     Audit-Id: 2c3d1aaa-69de-4fae-9100-50a3e764ce54
	I0114 02:32:16.544193    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:16.544199    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:16.544203    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:16.544255    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:17.035265    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:17.035279    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:17.035288    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:17.035296    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:17.039043    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:17.039059    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:17.039065    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:17.039070    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:17.039077    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:17 GMT
	I0114 02:32:17.039082    9007 round_trippers.go:580]     Audit-Id: 0521d5e4-bcd6-4b5f-beeb-fea5bd9b0314
	I0114 02:32:17.039093    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:17.039099    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:17.039198    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:17.039576    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:17.039585    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:17.039593    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:17.039600    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:17.043512    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:17.043527    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:17.043534    9007 round_trippers.go:580]     Audit-Id: 9f6d7fdc-a1af-4e69-8740-04c1dc5634da
	I0114 02:32:17.043538    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:17.043543    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:17.043548    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:17.043554    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:17.043560    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:17 GMT
	I0114 02:32:17.043655    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:17.537226    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:17.537280    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:17.537298    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:17.537315    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:17.540826    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:17.540840    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:17.540849    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:17 GMT
	I0114 02:32:17.540854    9007 round_trippers.go:580]     Audit-Id: 7ac38a0e-97e4-418a-a1a1-cf25e4b9ca9c
	I0114 02:32:17.540859    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:17.540864    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:17.540869    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:17.540875    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:17.540943    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:17.541229    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:17.541236    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:17.541242    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:17.541247    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:17.543719    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:17.543729    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:17.543734    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:17.543740    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:17 GMT
	I0114 02:32:17.543751    9007 round_trippers.go:580]     Audit-Id: 6ec51d42-c466-453b-bb9d-b681cc7906b3
	I0114 02:32:17.543758    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:17.543765    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:17.543771    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:17.543843    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:18.035137    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:18.035151    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:18.035157    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:18.035162    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:18.037857    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:18.037871    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:18.037878    9007 round_trippers.go:580]     Audit-Id: f33bfc33-2a3c-4ef0-9b99-b21a8591196a
	I0114 02:32:18.037884    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:18.037893    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:18.037899    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:18.037904    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:18.037910    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:18 GMT
	I0114 02:32:18.037993    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:18.038273    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:18.038281    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:18.038287    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:18.038292    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:18.040616    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:18.040629    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:18.040635    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:18.040640    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:18.040645    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:18.040649    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:18.040655    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:18 GMT
	I0114 02:32:18.040660    9007 round_trippers.go:580]     Audit-Id: 04efb013-ef94-44fe-aa34-ab751fa14ed6
	I0114 02:32:18.040843    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:18.535744    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:18.535771    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:18.535834    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:18.535847    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:18.539686    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:18.539696    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:18.539702    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:18 GMT
	I0114 02:32:18.539707    9007 round_trippers.go:580]     Audit-Id: 848d4d86-753f-4624-8361-e68a6e4668f7
	I0114 02:32:18.539712    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:18.539720    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:18.539726    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:18.539731    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:18.539786    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:18.540042    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:18.540050    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:18.540057    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:18.540062    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:18.542227    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:18.542238    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:18.542244    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:18.542268    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:18 GMT
	I0114 02:32:18.542283    9007 round_trippers.go:580]     Audit-Id: f28ff3d9-f6ae-4594-acaf-7eca84ea5170
	I0114 02:32:18.542291    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:18.542298    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:18.542307    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:18.542449    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:18.542640    9007 pod_ready.go:102] pod "etcd-multinode-022829" in "kube-system" namespace has status "Ready":"False"
	I0114 02:32:19.035312    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:19.035340    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:19.035353    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:19.035364    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:19.039298    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:19.039313    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:19.039320    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:19 GMT
	I0114 02:32:19.039326    9007 round_trippers.go:580]     Audit-Id: 7b06b2de-634d-4023-99bd-78d7b75dcce1
	I0114 02:32:19.039331    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:19.039338    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:19.039343    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:19.039348    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:19.039401    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:19.039648    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:19.039655    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:19.039661    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:19.039666    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:19.041897    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:19.041906    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:19.041914    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:19.041920    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:19 GMT
	I0114 02:32:19.041925    9007 round_trippers.go:580]     Audit-Id: 55ac49b6-56ec-46d1-adf7-06496db18c67
	I0114 02:32:19.041930    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:19.041935    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:19.041939    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:19.042018    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:19.535210    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:19.535228    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:19.535237    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:19.535245    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:19.538480    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:19.538491    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:19.538498    9007 round_trippers.go:580]     Audit-Id: 2be90e77-6930-46d3-8f24-1a922ea051b5
	I0114 02:32:19.538503    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:19.538508    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:19.538513    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:19.538518    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:19.538523    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:19 GMT
	I0114 02:32:19.538579    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"699","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6269 chars]
	I0114 02:32:19.538836    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:19.538843    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:19.538849    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:19.538854    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:19.540841    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:19.540851    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:19.540859    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:19.540865    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:19.540871    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:19.540875    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:19.540881    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:19 GMT
	I0114 02:32:19.540885    9007 round_trippers.go:580]     Audit-Id: 2fcfcf32-764d-4e38-b6c9-0814ad583fc9
	I0114 02:32:19.540941    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:20.035281    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:20.035304    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.035317    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.035328    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.039094    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:20.039109    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.039116    9007 round_trippers.go:580]     Audit-Id: 1f26eaac-7d74-499e-b089-3feac61ef623
	I0114 02:32:20.039126    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.039131    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.039135    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.039140    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.039145    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.039208    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"765","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I0114 02:32:20.039456    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:20.039463    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.039469    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.039474    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.041659    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:20.041667    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.041673    9007 round_trippers.go:580]     Audit-Id: 53852745-3747-4303-a3c3-400a9e4d0aa4
	I0114 02:32:20.041678    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.041683    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.041688    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.041693    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.041698    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.041765    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:20.041941    9007 pod_ready.go:92] pod "etcd-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:20.041951    9007 pod_ready.go:81] duration metric: took 6.012270896s waiting for pod "etcd-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:20.041961    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:20.041986    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:20.041991    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.041997    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.042002    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.044280    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:20.044289    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.044295    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.044301    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.044306    9007 round_trippers.go:580]     Audit-Id: 789be0dd-0966-4d1c-8e23-3be8718b5fb4
	I0114 02:32:20.044311    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.044316    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.044322    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.044412    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:20.044673    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:20.044679    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.044685    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.044691    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.046776    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:20.046785    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.046791    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.046796    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.046802    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.046806    9007 round_trippers.go:580]     Audit-Id: d4a08213-918a-41e3-a1d4-6054b544f39b
	I0114 02:32:20.046812    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.046817    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.046861    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:20.547663    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:20.547683    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.547696    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.547706    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.551816    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:20.551827    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.551833    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.551838    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.551844    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.551848    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.551854    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.551858    9007 round_trippers.go:580]     Audit-Id: a0481b75-e34f-40b7-a671-654a0bf1f81d
	I0114 02:32:20.551971    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:20.552335    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:20.552349    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:20.552358    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:20.552366    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:20.554685    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:20.554694    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:20.554700    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:20.554705    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:20.554711    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:20 GMT
	I0114 02:32:20.554716    9007 round_trippers.go:580]     Audit-Id: 81dc5e18-61f3-4c57-b44e-0fce7d654bbc
	I0114 02:32:20.554725    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:20.554730    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:20.554797    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:21.047487    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:21.047508    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:21.047521    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:21.047531    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:21.051457    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:21.051471    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:21.051479    9007 round_trippers.go:580]     Audit-Id: 6dcd3831-9996-4aba-940e-a68b406fdb53
	I0114 02:32:21.051486    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:21.051494    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:21.051501    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:21.051506    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:21.051512    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:21 GMT
	I0114 02:32:21.052072    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:21.052370    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:21.052379    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:21.052385    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:21.052390    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:21.054586    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:21.054597    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:21.054603    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:21 GMT
	I0114 02:32:21.054608    9007 round_trippers.go:580]     Audit-Id: 29507011-5709-49c9-813a-57b4474681dc
	I0114 02:32:21.054613    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:21.054618    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:21.054623    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:21.054628    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:21.054674    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:21.549250    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:21.549275    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:21.549307    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:21.549319    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:21.553434    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:21.553449    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:21.553456    9007 round_trippers.go:580]     Audit-Id: 8edeca49-52a7-400d-94f6-0c3e892f3f26
	I0114 02:32:21.553463    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:21.553470    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:21.553477    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:21.553484    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:21.553492    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:21 GMT
	I0114 02:32:21.553595    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:21.553904    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:21.553910    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:21.553916    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:21.553921    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:21.556246    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:21.556256    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:21.556263    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:21 GMT
	I0114 02:32:21.556269    9007 round_trippers.go:580]     Audit-Id: 2a023f3e-44ab-49d5-a6e1-28598aed2ce2
	I0114 02:32:21.556274    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:21.556279    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:21.556283    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:21.556288    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:21.556637    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:22.049299    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:22.049321    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:22.049334    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:22.049345    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:22.053720    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:22.053732    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:22.053738    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:22.053743    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:22.053749    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 02:32:22.053753    9007 round_trippers.go:580]     Audit-Id: c62392cd-868e-4bbc-bf0b-7f8ddf33c0c3
	I0114 02:32:22.053759    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:22.053764    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:22.053853    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:22.054166    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:22.054174    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:22.054182    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:22.054190    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:22.056206    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:22.056215    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:22.056221    9007 round_trippers.go:580]     Audit-Id: 66b11b91-0714-43d0-a265-dfbbe472030e
	I0114 02:32:22.056227    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:22.056233    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:22.056237    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:22.056243    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:22.056247    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 02:32:22.056290    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:22.056475    9007 pod_ready.go:102] pod "kube-apiserver-multinode-022829" in "kube-system" namespace has status "Ready":"False"
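The pod_ready.go:102 line above reports that the kube-apiserver pod's Ready condition is still "False", based on the Pod JSON bodies fetched just before it. A minimal sketch of such a check, assuming the logged response body has been decoded into a small struct (field names follow the standard Pod status schema; the function name and struct are illustrative, not minikube's actual code):

package main

import (
	"encoding/json"
	"fmt"
)

// podStatus holds only the fields needed to read the Ready condition.
type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isPodReady reports whether the pod's Ready condition has status "True".
func isPodReady(raw []byte) (bool, error) {
	var p podStatus
	if err := json.Unmarshal(raw, &p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// Tiny stand-in for the much larger Pod bodies logged above.
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready, err := isPodReady(body)
	fmt.Println(ready, err) // false <nil>, matching the "Ready":"False" lines in this log
}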
	I0114 02:32:22.547552    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:22.547575    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:22.547589    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:22.547600    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:22.551909    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:22.551921    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:22.551927    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:22.551938    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:22.551944    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:22.551949    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:22.551954    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 02:32:22.551959    9007 round_trippers.go:580]     Audit-Id: 0cd6db79-abd6-4f93-b018-1704da588545
	I0114 02:32:22.552032    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:22.552324    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:22.552330    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:22.552336    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:22.552342    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:22.554428    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:22.554438    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:22.554444    9007 round_trippers.go:580]     Audit-Id: c114a831-cba2-4f44-9eff-fa07d320f21e
	I0114 02:32:22.554449    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:22.554455    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:22.554460    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:22.554465    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:22.554470    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:22 GMT
	I0114 02:32:22.554518    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:23.047948    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:23.047971    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:23.047984    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:23.047994    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:23.052659    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:23.052673    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:23.052680    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:23.052684    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:23.052689    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:23.052695    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 02:32:23.052700    9007 round_trippers.go:580]     Audit-Id: 41af783c-b311-4fe1-b72f-b05f03a04d14
	I0114 02:32:23.052705    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:23.052804    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:23.053092    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:23.053098    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:23.053105    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:23.053111    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:23.055221    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:23.055230    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:23.055236    9007 round_trippers.go:580]     Audit-Id: cf6fdac7-e1c0-4aaa-a131-6e06498ab0a5
	I0114 02:32:23.055242    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:23.055247    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:23.055252    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:23.055257    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:23.055263    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 02:32:23.055302    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:23.547997    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:23.548020    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:23.548034    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:23.548044    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:23.552225    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:23.552242    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:23.552250    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:23.552257    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:23.552264    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:23.552270    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:23.552278    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 02:32:23.552284    9007 round_trippers.go:580]     Audit-Id: f6ca9fff-ab05-4c0c-ac41-5ff32c7f5139
	I0114 02:32:23.552398    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:23.552706    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:23.552712    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:23.552718    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:23.552724    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:23.554888    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:23.554897    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:23.554903    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:23.554908    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:23.554913    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:23.554918    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:23.554923    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:23 GMT
	I0114 02:32:23.554927    9007 round_trippers.go:580]     Audit-Id: a6d2b574-3e09-49a4-ad4c-29a622796d61
	I0114 02:32:23.554983    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:24.048239    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:24.048254    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:24.048263    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:24.048270    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:24.051039    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:24.051049    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:24.051055    9007 round_trippers.go:580]     Audit-Id: 3089f126-ecbe-404b-92c5-31176aee2b3e
	I0114 02:32:24.051059    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:24.051065    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:24.051070    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:24.051076    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:24.051081    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 02:32:24.051156    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:24.051442    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:24.051449    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:24.051456    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:24.051463    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:24.053614    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:24.053625    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:24.053631    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:24.053636    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 02:32:24.053643    9007 round_trippers.go:580]     Audit-Id: 62d5e1c6-aa9a-46c2-972b-9f8a44b4022a
	I0114 02:32:24.053648    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:24.053653    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:24.053658    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:24.053700    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:24.547377    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:24.547398    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:24.547411    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:24.547421    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:24.551626    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:24.551639    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:24.551645    9007 round_trippers.go:580]     Audit-Id: 77037102-9134-4c35-9faa-f7a2d4386c23
	I0114 02:32:24.551651    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:24.551656    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:24.551661    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:24.551666    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:24.551671    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 02:32:24.551761    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"721","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8673 chars]
	I0114 02:32:24.552043    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:24.552050    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:24.552056    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:24.552068    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:24.554382    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:24.554391    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:24.554397    9007 round_trippers.go:580]     Audit-Id: a3db9850-5b3b-42a6-8c12-1fe4c4f2c4fa
	I0114 02:32:24.554402    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:24.554408    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:24.554412    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:24.554418    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:24.554422    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:24 GMT
	I0114 02:32:24.554469    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:24.554656    9007 pod_ready.go:102] pod "kube-apiserver-multinode-022829" in "kube-system" namespace has status "Ready":"False"
	I0114 02:32:25.047479    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:25.047500    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.047513    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.047523    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.051755    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:25.051766    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.051773    9007 round_trippers.go:580]     Audit-Id: 1c661f70-904f-4e72-a358-5045c260708f
	I0114 02:32:25.051779    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.051785    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.051790    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.051795    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.051800    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.051860    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"792","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I0114 02:32:25.052133    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.052140    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.052146    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.052151    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.054236    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.054245    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.054251    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.054256    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.054262    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.054266    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.054271    9007 round_trippers.go:580]     Audit-Id: 9223b7f3-f2e1-4de0-a41c-728472ba2810
	I0114 02:32:25.054276    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.054315    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.054497    9007 pod_ready.go:92] pod "kube-apiserver-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.054507    9007 pod_ready.go:81] duration metric: took 5.0125291s waiting for pod "kube-apiserver-multinode-022829" in "kube-system" namespace to be "Ready" ...
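The wait just completed ("waiting up to 4m0s", GETs spaced roughly 500ms apart, then a duration metric once the pod turns Ready) follows a standard poll-until-ready-or-timeout pattern. A rough, self-contained sketch of that pattern under those assumed interval and timeout values (the function names are illustrative, not minikube's):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForReady polls check on a fixed interval until it returns true,
// an error, or the timeout expires.
func waitForReady(check func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	attempts := 0
	err := waitForReady(func() (bool, error) {
		attempts++
		return attempts >= 3, nil // pretend the pod turns Ready on the third poll
	}, 500*time.Millisecond, 4*time.Minute)
	fmt.Printf("ready after %d polls in %s (err=%v)\n", attempts, time.Since(start).Round(time.Millisecond), err)
}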
	I0114 02:32:25.054515    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.054543    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022829
	I0114 02:32:25.054547    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.054553    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.054558    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.056680    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.056691    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.056697    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.056703    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.056707    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.056713    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.056718    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.056723    9007 round_trippers.go:580]     Audit-Id: 2eac0d91-4d46-4a44-917a-99cd9ae49a3d
	I0114 02:32:25.056776    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-022829","namespace":"kube-system","uid":"3ecd3fea-11b6-4dd0-9ac1-200f293b0e22","resourceVersion":"768","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf97c75e822bcdf884e5298e8f141a84","kubernetes.io/config.mirror":"cf97c75e822bcdf884e5298e8f141a84","kubernetes.io/config.seen":"2023-01-14T10:28:46.070561468Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8004 chars]
	I0114 02:32:25.057022    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.057029    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.057035    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.057054    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.059077    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.059086    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.059092    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.059097    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.059103    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.059107    9007 round_trippers.go:580]     Audit-Id: 9128f849-89a6-47e4-bd0d-31e76a4ea6e1
	I0114 02:32:25.059113    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.059117    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.059162    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.059339    9007 pod_ready.go:92] pod "kube-controller-manager-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.059346    9007 pod_ready.go:81] duration metric: took 4.825915ms waiting for pod "kube-controller-manager-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.059353    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6bgqj" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.059378    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-6bgqj
	I0114 02:32:25.059383    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.059388    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.059394    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.061593    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.061602    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.061607    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.061612    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.061617    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.061622    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.061628    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.061632    9007 round_trippers.go:580]     Audit-Id: bbcbbb18-aeb7-4c51-ba3f-7757c0f401ec
	I0114 02:32:25.061671    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6bgqj","generateName":"kube-proxy-","namespace":"kube-system","uid":"330a14fa-1ce0-4857-81a1-2988087382d4","resourceVersion":"679","creationTimestamp":"2023-01-14T10:30:18Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I0114 02:32:25.061898    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m03
	I0114 02:32:25.061904    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.061910    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.061916    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.063791    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:25.063800    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.063806    9007 round_trippers.go:580]     Audit-Id: 3f51acea-b865-4233-a2ea-e37ae3a6f8b6
	I0114 02:32:25.063813    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.063817    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.063822    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.063827    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.063832    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.063979    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829-m03","uid":"24958ca9-a14e-431f-b462-d3bfbcd7c387","resourceVersion":"692","creationTimestamp":"2023-01-14T10:31:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4323 chars]
	I0114 02:32:25.064139    9007 pod_ready.go:92] pod "kube-proxy-6bgqj" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.064146    9007 pod_ready.go:81] duration metric: took 4.78819ms waiting for pod "kube-proxy-6bgqj" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.064152    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7p92j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.064176    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-7p92j
	I0114 02:32:25.064181    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.064187    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.064193    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.065914    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:25.065923    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.065929    9007 round_trippers.go:580]     Audit-Id: c37a8e69-60e7-4d7a-aff2-edb1a78973c9
	I0114 02:32:25.065934    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.065939    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.065944    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.065949    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.065954    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.066094    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7p92j","generateName":"kube-proxy-","namespace":"kube-system","uid":"abe462b8-5607-4e29-b040-12678d7ec756","resourceVersion":"473","creationTimestamp":"2023-01-14T10:29:34Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I0114 02:32:25.066312    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m02
	I0114 02:32:25.066318    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.066324    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.066329    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.068489    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.068498    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.068505    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.068510    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.068515    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.068520    9007 round_trippers.go:580]     Audit-Id: 1f1928d4-1e06-4535-ab72-d4e91efbfb4b
	I0114 02:32:25.068526    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.068531    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.068571    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829-m02","uid":"3911d4d5-57fa-4f76-9a4f-ea2b104e8003","resourceVersion":"538","creationTimestamp":"2023-01-14T10:29:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4506 chars]
	I0114 02:32:25.068728    9007 pod_ready.go:92] pod "kube-proxy-7p92j" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.068735    9007 pod_ready.go:81] duration metric: took 4.577571ms waiting for pod "kube-proxy-7p92j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.068740    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pplrc" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.068784    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-pplrc
	I0114 02:32:25.068789    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.068794    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.068800    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.070556    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:25.070566    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.070571    9007 round_trippers.go:580]     Audit-Id: fe68db55-59f7-46eb-8b06-92e68a3e8b49
	I0114 02:32:25.070577    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.070581    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.070587    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.070592    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.070597    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.070788    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pplrc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f6acf6b8-0d1e-4694-85de-f70fb0bcfee7","resourceVersion":"743","creationTimestamp":"2023-01-14T10:29:10Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0114 02:32:25.071016    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.071022    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.071028    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.071034    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.072851    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:25.072859    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.072864    9007 round_trippers.go:580]     Audit-Id: 95a77ecb-7495-40fd-a52d-0c3b2e3f8436
	I0114 02:32:25.072869    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.072875    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.072879    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.072885    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.072889    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.073063    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.073235    9007 pod_ready.go:92] pod "kube-proxy-pplrc" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.073241    9007 pod_ready.go:81] duration metric: took 4.496352ms waiting for pod "kube-proxy-pplrc" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.073247    9007 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.248533    9007 request.go:614] Waited for 175.226969ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022829
	I0114 02:32:25.248593    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022829
	I0114 02:32:25.248604    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.248617    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.248628    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.252665    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:25.252676    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.252681    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.252686    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.252691    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.252695    9007 round_trippers.go:580]     Audit-Id: 73869de0-69ff-4fcb-8e9b-d785d147adc8
	I0114 02:32:25.252700    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.252705    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.252794    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-022829","namespace":"kube-system","uid":"dec76631-6f7c-433f-87e4-2d0c847b6f29","resourceVersion":"781","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"802a051d3df9ed6e4a14219bcab9d87d","kubernetes.io/config.mirror":"802a051d3df9ed6e4a14219bcab9d87d","kubernetes.io/config.seen":"2023-01-14T10:28:46.070562243Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4886 chars]
	I0114 02:32:25.447985    9007 request.go:614] Waited for 194.961996ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.448076    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.448087    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.448102    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.448114    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.451648    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:25.451661    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.451669    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.451675    9007 round_trippers.go:580]     Audit-Id: b998b8bd-ec3e-49ab-868f-f9835834f4be
	I0114 02:32:25.451681    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.451687    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.451693    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.451700    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.451768    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.451979    9007 pod_ready.go:92] pod "kube-scheduler-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:25.451986    9007 pod_ready.go:81] duration metric: took 378.732619ms waiting for pod "kube-scheduler-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.451993    9007 pod_ready.go:38] duration metric: took 11.494429703s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 02:32:25.452004    9007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 02:32:25.460203    9007 command_runner.go:130] > -16
	I0114 02:32:25.460220    9007 ops.go:34] apiserver oom_adj: -16
	I0114 02:32:25.460249    9007 kubeadm.go:631] restartCluster took 22.966695382s
	I0114 02:32:25.460261    9007 kubeadm.go:398] StartCluster complete in 23.005164118s
	I0114 02:32:25.460275    9007 settings.go:142] acquiring lock: {Name:mka95467446367990e489ec54b84107091d6186f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:32:25.460365    9007 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:25.460766    9007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/kubeconfig: {Name:mkb6d1db5780815291441dc67b348461b9325651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:32:25.461375    9007 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:25.461567    9007 kapi.go:59] client config for multinode-022829: &rest.Config{Host:"https://127.0.0.1:51427", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:32:25.461761    9007 round_trippers.go:463] GET https://127.0.0.1:51427/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0114 02:32:25.461767    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.461773    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.461778    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.464260    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.464270    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.464276    9007 round_trippers.go:580]     Content-Length: 291
	I0114 02:32:25.464281    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.464286    9007 round_trippers.go:580]     Audit-Id: 8eafcb1b-4383-47a7-8950-4032a578f9e0
	I0114 02:32:25.464291    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.464296    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.464304    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.464310    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.464322    9007 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"caadf10f-dc39-47dc-8b33-5d3e20072eab","resourceVersion":"787","creationTimestamp":"2023-01-14T10:28:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0114 02:32:25.464410    9007 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-022829" rescaled to 1
	I0114 02:32:25.464440    9007 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 02:32:25.464456    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 02:32:25.464503    9007 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0114 02:32:25.488288    9007 out.go:177] * Verifying Kubernetes components...
	I0114 02:32:25.464630    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:32:25.488362    9007 addons.go:65] Setting storage-provisioner=true in profile "multinode-022829"
	I0114 02:32:25.488368    9007 addons.go:65] Setting default-storageclass=true in profile "multinode-022829"
	I0114 02:32:25.530273    9007 addons.go:227] Setting addon storage-provisioner=true in "multinode-022829"
	I0114 02:32:25.530291    9007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:32:25.530292    9007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-022829"
	W0114 02:32:25.530295    9007 addons.go:236] addon storage-provisioner should already be in state true
	I0114 02:32:25.520819    9007 command_runner.go:130] > apiVersion: v1
	I0114 02:32:25.530319    9007 command_runner.go:130] > data:
	I0114 02:32:25.530328    9007 command_runner.go:130] >   Corefile: |
	I0114 02:32:25.530332    9007 command_runner.go:130] >     .:53 {
	I0114 02:32:25.530348    9007 command_runner.go:130] >         errors
	I0114 02:32:25.530350    9007 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:32:25.530357    9007 command_runner.go:130] >         health {
	I0114 02:32:25.530370    9007 command_runner.go:130] >            lameduck 5s
	I0114 02:32:25.530376    9007 command_runner.go:130] >         }
	I0114 02:32:25.530379    9007 command_runner.go:130] >         ready
	I0114 02:32:25.530384    9007 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0114 02:32:25.530388    9007 command_runner.go:130] >            pods insecure
	I0114 02:32:25.530392    9007 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0114 02:32:25.530396    9007 command_runner.go:130] >            ttl 30
	I0114 02:32:25.530399    9007 command_runner.go:130] >         }
	I0114 02:32:25.530404    9007 command_runner.go:130] >         prometheus :9153
	I0114 02:32:25.530407    9007 command_runner.go:130] >         hosts {
	I0114 02:32:25.530411    9007 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0114 02:32:25.530417    9007 command_runner.go:130] >            fallthrough
	I0114 02:32:25.530421    9007 command_runner.go:130] >         }
	I0114 02:32:25.530425    9007 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0114 02:32:25.530434    9007 command_runner.go:130] >            max_concurrent 1000
	I0114 02:32:25.530438    9007 command_runner.go:130] >         }
	I0114 02:32:25.530442    9007 command_runner.go:130] >         cache 30
	I0114 02:32:25.530446    9007 command_runner.go:130] >         loop
	I0114 02:32:25.530453    9007 command_runner.go:130] >         reload
	I0114 02:32:25.530458    9007 command_runner.go:130] >         loadbalance
	I0114 02:32:25.530461    9007 command_runner.go:130] >     }
	I0114 02:32:25.530465    9007 command_runner.go:130] > kind: ConfigMap
	I0114 02:32:25.530468    9007 command_runner.go:130] > metadata:
	I0114 02:32:25.530473    9007 command_runner.go:130] >   creationTimestamp: "2023-01-14T10:28:57Z"
	I0114 02:32:25.530476    9007 command_runner.go:130] >   name: coredns
	I0114 02:32:25.530480    9007 command_runner.go:130] >   namespace: kube-system
	I0114 02:32:25.530484    9007 command_runner.go:130] >   resourceVersion: "373"
	I0114 02:32:25.530489    9007 command_runner.go:130] >   uid: 014885d0-0d84-4e89-ad26-9cef16bc04dc
	I0114 02:32:25.530566    9007 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0114 02:32:25.530598    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:32:25.530710    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:32:25.541611    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:25.595371    9007 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:25.616343    9007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 02:32:25.616674    9007 kapi.go:59] client config for multinode-022829: &rest.Config{Host:"https://127.0.0.1:51427", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:32:25.674638    9007 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 02:32:25.674664    9007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 02:32:25.674897    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:25.675730    9007 round_trippers.go:463] GET https://127.0.0.1:51427/apis/storage.k8s.io/v1/storageclasses
	I0114 02:32:25.675931    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.676004    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.676022    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.692618    9007 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0114 02:32:25.692650    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.692659    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.692664    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.692691    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.692697    9007 round_trippers.go:580]     Content-Length: 1273
	I0114 02:32:25.692702    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.692707    9007 round_trippers.go:580]     Audit-Id: d846d4fc-122c-4cd1-86eb-d5636caf93e9
	I0114 02:32:25.692713    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.692750    9007 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"standard","uid":"22ee76b0-cf85-4ae8-85bf-3be3c87e3cba","resourceVersion":"382","creationTimestamp":"2023-01-14T10:29:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0114 02:32:25.693168    9007 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"22ee76b0-cf85-4ae8-85bf-3be3c87e3cba","resourceVersion":"382","creationTimestamp":"2023-01-14T10:29:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0114 02:32:25.693200    9007 round_trippers.go:463] PUT https://127.0.0.1:51427/apis/storage.k8s.io/v1/storageclasses/standard
	I0114 02:32:25.693204    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.693211    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.693217    9007 round_trippers.go:473]     Content-Type: application/json
	I0114 02:32:25.693222    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.696781    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:25.696793    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.696799    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.696822    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.696830    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.696837    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.696844    9007 round_trippers.go:580]     Content-Length: 1220
	I0114 02:32:25.696850    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.696854    9007 round_trippers.go:580]     Audit-Id: 3c8efc8c-029f-486f-9f00-edf0e1069039
	I0114 02:32:25.696879    9007 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"22ee76b0-cf85-4ae8-85bf-3be3c87e3cba","resourceVersion":"382","creationTimestamp":"2023-01-14T10:29:12Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-01-14T10:29:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0114 02:32:25.696952    9007 addons.go:227] Setting addon default-storageclass=true in "multinode-022829"
	W0114 02:32:25.696960    9007 addons.go:236] addon default-storageclass should already be in state true
	I0114 02:32:25.696980    9007 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:32:25.697379    9007 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:32:25.699132    9007 node_ready.go:35] waiting up to 6m0s for node "multinode-022829" to be "Ready" ...
	I0114 02:32:25.699213    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:25.699218    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.699224    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.699230    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.701880    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:25.701896    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.701904    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.701933    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.701946    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.701956    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.701967    9007 round_trippers.go:580]     Audit-Id: aee60fdc-b101-4e8a-be69-58d01f68c4f7
	I0114 02:32:25.701977    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.702146    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:25.702381    9007 node_ready.go:49] node "multinode-022829" has status "Ready":"True"
	I0114 02:32:25.702390    9007 node_ready.go:38] duration metric: took 3.238289ms waiting for node "multinode-022829" to be "Ready" ...
	I0114 02:32:25.702396    9007 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 02:32:25.736900    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:25.756451    9007 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 02:32:25.756463    9007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 02:32:25.756553    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:25.813585    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:25.827418    9007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 02:32:25.847507    9007 request.go:614] Waited for 145.076103ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:25.847571    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:25.847576    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:25.847583    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:25.847590    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:25.851705    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:25.851722    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:25.851731    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:25.851737    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:25.851744    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:25.851752    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:25 GMT
	I0114 02:32:25.851759    9007 round_trippers.go:580]     Audit-Id: 34b9c5ce-2a62-4e22-aedb-f16503fa37b0
	I0114 02:32:25.851765    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:25.854725    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"756","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84540 chars]
	I0114 02:32:25.856827    9007 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-xg88j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:25.904132    9007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 02:32:26.038521    9007 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0114 02:32:26.039965    9007 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0114 02:32:26.042071    9007 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 02:32:26.044067    9007 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0114 02:32:26.045826    9007 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0114 02:32:26.047746    9007 request.go:614] Waited for 190.883825ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/coredns-565d847f94-xg88j
	I0114 02:32:26.047772    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/coredns-565d847f94-xg88j
	I0114 02:32:26.047777    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.047783    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.047789    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.050023    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:26.050036    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.050044    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.050055    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.050060    9007 round_trippers.go:580]     Audit-Id: b98b5afb-8c3e-4f3d-b6a7-5d29ef084765
	I0114 02:32:26.050065    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.050070    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.050075    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.050148    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"756","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6552 chars]
	I0114 02:32:26.052282    9007 command_runner.go:130] > pod/storage-provisioner configured
	I0114 02:32:26.154125    9007 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0114 02:32:26.202699    9007 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 02:32:26.223596    9007 addons.go:488] enableAddons completed in 759.101003ms
	I0114 02:32:26.248148    9007 request.go:614] Waited for 197.660565ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:26.248237    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:26.248248    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.248261    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.248274    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.252257    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:26.252273    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.252281    9007 round_trippers.go:580]     Audit-Id: 371364d0-f414-4639-b3d7-b080f86132e0
	I0114 02:32:26.252288    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.252295    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.252302    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.252309    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.252315    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.252402    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:26.252642    9007 pod_ready.go:92] pod "coredns-565d847f94-xg88j" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:26.252648    9007 pod_ready.go:81] duration metric: took 395.809499ms waiting for pod "coredns-565d847f94-xg88j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:26.252654    9007 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:26.448647    9007 request.go:614] Waited for 195.925176ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:26.448711    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/etcd-multinode-022829
	I0114 02:32:26.448727    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.448744    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.448761    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.453218    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:26.453233    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.453240    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.453245    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.453249    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.453256    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.453261    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.453266    9007 round_trippers.go:580]     Audit-Id: 2288153a-1320-4bd8-b5e4-10b788caf9e5
	I0114 02:32:26.453345    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-022829","namespace":"kube-system","uid":"f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8","resourceVersion":"765","creationTimestamp":"2023-01-14T10:28:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.mirror":"11becf75657d982fbe9c634e29b1fbd1","kubernetes.io/config.seen":"2023-01-14T10:28:57.641513352Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6045 chars]
	I0114 02:32:26.648599    9007 request.go:614] Waited for 194.955808ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:26.648656    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:26.648667    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.648680    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.648726    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.652899    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:26.652910    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.652916    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.652921    9007 round_trippers.go:580]     Audit-Id: 1e2102f6-a764-4629-91e8-8b401d477418
	I0114 02:32:26.652925    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.652931    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.652935    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.652941    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.653029    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:26.653246    9007 pod_ready.go:92] pod "etcd-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:26.653252    9007 pod_ready.go:81] duration metric: took 400.592504ms waiting for pod "etcd-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:26.653263    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:26.847559    9007 request.go:614] Waited for 194.260248ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:26.847615    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-022829
	I0114 02:32:26.847621    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:26.847630    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:26.847645    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:26.850432    9007 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0114 02:32:26.850443    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:26.850449    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:26.850454    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:26.850465    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:26.850471    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:26 GMT
	I0114 02:32:26.850491    9007 round_trippers.go:580]     Audit-Id: 9c5c3248-e25c-48ec-8c10-a53fefa40371
	I0114 02:32:26.850499    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:26.850669    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-022829","namespace":"kube-system","uid":"b153813e-4767-4643-9cc4-ab5c1f8a2441","resourceVersion":"792","creationTimestamp":"2023-01-14T10:28:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.mirror":"5d602ff7abd9993465b2bbbad612da88","kubernetes.io/config.seen":"2023-01-14T10:28:57.641524048Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8429 chars]
	I0114 02:32:27.047575    9007 request.go:614] Waited for 196.597625ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:27.047656    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:27.047666    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.047679    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.047689    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.052001    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:27.052018    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.052030    9007 round_trippers.go:580]     Audit-Id: 4f760086-4c49-4c81-99c3-7353707cd9d7
	I0114 02:32:27.052036    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.052041    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.052050    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.052056    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.052060    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.052117    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:27.052324    9007 pod_ready.go:92] pod "kube-apiserver-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:27.052331    9007 pod_ready.go:81] duration metric: took 399.061734ms waiting for pod "kube-apiserver-multinode-022829" in "kube-system" namespace to be "Ready" ...
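Editor's note: the recurring "Waited for ... due to client-side throttling, not priority and fairness" entries come from client-go's own rate limiter (roughly QPS 5 / burst 10 by default), not from the apiserver. A minimal sketch of raising those limits when building a client; the kubeconfig path and the values 50/100 are illustrative assumptions, not what minikube uses.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a kubeconfig (path is a placeholder).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        // client-go defaults to roughly QPS=5, Burst=10; bursts of GETs beyond
        // that produce the "client-side throttling" waits logged above.
        // Raising the limits (arbitrary values here) removes the waits at the
        // cost of hitting the apiserver harder.
        cfg.QPS = 50
        cfg.Burst = 100

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client constructed:", cs != nil)
    }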
	I0114 02:32:27.052338    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:27.247614    9007 request.go:614] Waited for 195.234933ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022829
	I0114 02:32:27.247696    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-022829
	I0114 02:32:27.247707    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.247724    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.247737    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.251622    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:27.251637    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.251645    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.251652    9007 round_trippers.go:580]     Audit-Id: a6c57865-700f-452f-9ea6-37b6e70839c1
	I0114 02:32:27.251658    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.251665    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.251671    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.251678    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.252098    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-022829","namespace":"kube-system","uid":"3ecd3fea-11b6-4dd0-9ac1-200f293b0e22","resourceVersion":"768","creationTimestamp":"2023-01-14T10:28:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cf97c75e822bcdf884e5298e8f141a84","kubernetes.io/config.mirror":"cf97c75e822bcdf884e5298e8f141a84","kubernetes.io/config.seen":"2023-01-14T10:28:46.070561468Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8004 chars]
	I0114 02:32:27.449487    9007 request.go:614] Waited for 197.08782ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:27.449568    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:27.449578    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.449620    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.449632    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.453615    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:27.453629    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.453638    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.453649    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.453656    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.453663    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.453670    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.453677    9007 round_trippers.go:580]     Audit-Id: f0ca6b36-204f-4cee-b89b-40c2015540dd
	I0114 02:32:27.453759    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:27.454039    9007 pod_ready.go:92] pod "kube-controller-manager-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:27.454049    9007 pod_ready.go:81] duration metric: took 401.705514ms waiting for pod "kube-controller-manager-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:27.454059    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6bgqj" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:27.648864    9007 request.go:614] Waited for 194.74606ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-6bgqj
	I0114 02:32:27.648983    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-6bgqj
	I0114 02:32:27.648995    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.649006    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.649018    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.653516    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:27.653534    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.653542    9007 round_trippers.go:580]     Audit-Id: 990f399a-b351-4eb2-8b75-da91279d7703
	I0114 02:32:27.653565    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.653579    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.653592    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.653600    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.653608    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.653680    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6bgqj","generateName":"kube-proxy-","namespace":"kube-system","uid":"330a14fa-1ce0-4857-81a1-2988087382d4","resourceVersion":"679","creationTimestamp":"2023-01-14T10:30:18Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:30:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I0114 02:32:27.848059    9007 request.go:614] Waited for 194.052924ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m03
	I0114 02:32:27.848108    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m03
	I0114 02:32:27.848117    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:27.848129    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:27.848141    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:27.852661    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:27.852675    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:27.852681    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:27.852686    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:27 GMT
	I0114 02:32:27.852696    9007 round_trippers.go:580]     Audit-Id: 71eaddfb-05a5-480c-baf3-2b5a085d7d02
	I0114 02:32:27.852702    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:27.852706    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:27.852711    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:27.852770    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829-m03","uid":"24958ca9-a14e-431f-b462-d3bfbcd7c387","resourceVersion":"692","creationTimestamp":"2023-01-14T10:31:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:31:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4323 chars]
	I0114 02:32:27.852951    9007 pod_ready.go:92] pod "kube-proxy-6bgqj" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:27.852958    9007 pod_ready.go:81] duration metric: took 398.892958ms waiting for pod "kube-proxy-6bgqj" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:27.852965    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7p92j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.047846    9007 request.go:614] Waited for 194.814141ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-7p92j
	I0114 02:32:28.047917    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-7p92j
	I0114 02:32:28.047958    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.047973    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.047984    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.052582    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:28.052595    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.052601    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.052606    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.052611    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.052620    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.052625    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.052630    9007 round_trippers.go:580]     Audit-Id: c8872bae-9f98-4358-bf26-d15addc55588
	I0114 02:32:28.052700    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7p92j","generateName":"kube-proxy-","namespace":"kube-system","uid":"abe462b8-5607-4e29-b040-12678d7ec756","resourceVersion":"473","creationTimestamp":"2023-01-14T10:29:34Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5536 chars]
	I0114 02:32:28.248933    9007 request.go:614] Waited for 195.87012ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m02
	I0114 02:32:28.248986    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m02
	I0114 02:32:28.248999    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.249012    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.249024    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.252824    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:28.252837    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.252843    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.252853    9007 round_trippers.go:580]     Audit-Id: ffbb1ceb-915d-4550-b4af-3aaea48b6945
	I0114 02:32:28.252859    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.252863    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.252868    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.252873    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.252930    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829-m02","uid":"3911d4d5-57fa-4f76-9a4f-ea2b104e8003","resourceVersion":"538","creationTimestamp":"2023-01-14T10:29:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4506 chars]
	I0114 02:32:28.253118    9007 pod_ready.go:92] pod "kube-proxy-7p92j" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:28.253125    9007 pod_ready.go:81] duration metric: took 400.142072ms waiting for pod "kube-proxy-7p92j" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.253135    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pplrc" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.447672    9007 request.go:614] Waited for 194.483956ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-pplrc
	I0114 02:32:28.447800    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-proxy-pplrc
	I0114 02:32:28.447812    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.447824    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.447838    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.451803    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:28.451818    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.451827    9007 round_trippers.go:580]     Audit-Id: 7a46278e-e3e7-4a9d-beda-1f1265525c87
	I0114 02:32:28.451834    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.451841    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.451847    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.451853    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.451861    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.452055    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pplrc","generateName":"kube-proxy-","namespace":"kube-system","uid":"f6acf6b8-0d1e-4694-85de-f70fb0bcfee7","resourceVersion":"743","creationTimestamp":"2023-01-14T10:29:10Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"c7ecd323-5445-4d86-89b2-536132fa201e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7ecd323-5445-4d86-89b2-536132fa201e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5735 chars]
	I0114 02:32:28.649514    9007 request.go:614] Waited for 197.145139ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:28.649645    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:28.649656    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.649669    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.649681    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.654083    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:28.654100    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.654108    9007 round_trippers.go:580]     Audit-Id: 80134d0f-7081-458d-9566-ea651b337b18
	I0114 02:32:28.654115    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.654123    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.654129    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.654138    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.654144    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.654219    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:28.654447    9007 pod_ready.go:92] pod "kube-proxy-pplrc" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:28.654454    9007 pod_ready.go:81] duration metric: took 401.311363ms waiting for pod "kube-proxy-pplrc" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.654461    9007 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:28.848320    9007 request.go:614] Waited for 193.806303ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022829
	I0114 02:32:28.848432    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-022829
	I0114 02:32:28.848443    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:28.848455    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:28.848465    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:28.852696    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:28.852711    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:28.852719    9007 round_trippers.go:580]     Audit-Id: 3ed19b2f-86f7-4fdc-a901-6c407dd3f1fc
	I0114 02:32:28.852726    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:28.852738    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:28.852747    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:28.852757    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:28.852785    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:28 GMT
	I0114 02:32:28.852842    9007 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-022829","namespace":"kube-system","uid":"dec76631-6f7c-433f-87e4-2d0c847b6f29","resourceVersion":"781","creationTimestamp":"2023-01-14T10:28:55Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"802a051d3df9ed6e4a14219bcab9d87d","kubernetes.io/config.mirror":"802a051d3df9ed6e4a14219bcab9d87d","kubernetes.io/config.seen":"2023-01-14T10:28:46.070562243Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:28:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4886 chars]
	I0114 02:32:29.048257    9007 request.go:614] Waited for 195.179619ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:29.048344    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes/multinode-022829
	I0114 02:32:29.048354    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.048366    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.048379    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.052682    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:29.052698    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.052705    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.052710    9007 round_trippers.go:580]     Audit-Id: 6239e34e-c627-4a3f-9abe-e3d872c4338f
	I0114 02:32:29.052715    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.052720    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.052725    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.052730    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.052791    9007 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-01-14T10:28:54Z","fieldsType":"FieldsV1","fi [truncated 5277 chars]
	I0114 02:32:29.053009    9007 pod_ready.go:92] pod "kube-scheduler-multinode-022829" in "kube-system" namespace has status "Ready":"True"
	I0114 02:32:29.053016    9007 pod_ready.go:81] duration metric: took 398.549335ms waiting for pod "kube-scheduler-multinode-022829" in "kube-system" namespace to be "Ready" ...
	I0114 02:32:29.053026    9007 pod_ready.go:38] duration metric: took 3.350613929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
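Editor's note: the pod_ready loop above issues a GET for each system-critical pod and checks its Ready condition before moving on. A minimal client-go sketch of the same idea; the function name and polling interval are ours, not minikube's.

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the named kube-system pod until its Ready condition is
    // True or the timeout expires, mirroring the pod_ready.go waits in the log.
    func waitPodReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }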
	I0114 02:32:29.053039    9007 api_server.go:51] waiting for apiserver process to appear ...
	I0114 02:32:29.053100    9007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:32:29.062410    9007 command_runner.go:130] > 1733
	I0114 02:32:29.063052    9007 api_server.go:71] duration metric: took 3.598589987s to wait for apiserver process to appear ...
	I0114 02:32:29.063062    9007 api_server.go:87] waiting for apiserver healthz status ...
	I0114 02:32:29.063068    9007 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51427/healthz ...
	I0114 02:32:29.068495    9007 api_server.go:278] https://127.0.0.1:51427/healthz returned 200:
	ok
	I0114 02:32:29.068534    9007 round_trippers.go:463] GET https://127.0.0.1:51427/version
	I0114 02:32:29.068540    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.068547    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.068553    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.069712    9007 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0114 02:32:29.069721    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.069727    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.069734    9007 round_trippers.go:580]     Content-Length: 263
	I0114 02:32:29.069740    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.069745    9007 round_trippers.go:580]     Audit-Id: e96b41a7-3c6f-4e31-900b-d736e335fad9
	I0114 02:32:29.069751    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.069755    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.069760    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.069775    9007 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0114 02:32:29.069799    9007 api_server.go:140] control plane version: v1.25.3
	I0114 02:32:29.069805    9007 api_server.go:130] duration metric: took 6.739418ms to wait for apiserver health ...
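Editor's note: the healthz probe and the /version request above can be reproduced through the same clientset: DoRaw returns the raw body ("ok" for a healthy apiserver) and Discovery().ServerVersion() parses the JSON payload shown in the log. A sketch, assuming a *kubernetes.Clientset built elsewhere.

    package health

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // checkAPIServer performs the same two probes as the log: GET /healthz and
    // then the version lookup that yielded "control plane version: v1.25.3".
    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return fmt.Errorf("healthz: %w", err)
        }
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q", body)
        }
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return fmt.Errorf("version: %w", err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion)
        return nil
    }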
	I0114 02:32:29.069812    9007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 02:32:29.247714    9007 request.go:614] Waited for 177.857132ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:29.247770    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:29.247783    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.247808    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.247863    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.253465    9007 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0114 02:32:29.253480    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.253487    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.253491    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.253499    9007 round_trippers.go:580]     Audit-Id: bb243b56-e120-45db-bea5-3c4152e1e1a6
	I0114 02:32:29.253504    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.253509    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.253513    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.254460    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"756","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84540 chars]
	I0114 02:32:29.256384    9007 system_pods.go:59] 12 kube-system pods found
	I0114 02:32:29.256394    9007 system_pods.go:61] "coredns-565d847f94-xg88j" [8ba9cbef-253e-46ad-aa78-55875dc5939b] Running
	I0114 02:32:29.256398    9007 system_pods.go:61] "etcd-multinode-022829" [f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8] Running
	I0114 02:32:29.256401    9007 system_pods.go:61] "kindnet-2ffw5" [6e2e34df-4259-4f9d-a1d8-b7c33a252211] Running
	I0114 02:32:29.256405    9007 system_pods.go:61] "kindnet-crlwb" [129cddf5-10fc-4467-ab9d-d9a47d195213] Running
	I0114 02:32:29.256409    9007 system_pods.go:61] "kindnet-pqh2t" [cb280495-e617-461c-a259-e28b47f301d6] Running
	I0114 02:32:29.256414    9007 system_pods.go:61] "kube-apiserver-multinode-022829" [b153813e-4767-4643-9cc4-ab5c1f8a2441] Running
	I0114 02:32:29.256419    9007 system_pods.go:61] "kube-controller-manager-multinode-022829" [3ecd3fea-11b6-4dd0-9ac1-200f293b0e22] Running
	I0114 02:32:29.256424    9007 system_pods.go:61] "kube-proxy-6bgqj" [330a14fa-1ce0-4857-81a1-2988087382d4] Running
	I0114 02:32:29.256428    9007 system_pods.go:61] "kube-proxy-7p92j" [abe462b8-5607-4e29-b040-12678d7ec756] Running
	I0114 02:32:29.256432    9007 system_pods.go:61] "kube-proxy-pplrc" [f6acf6b8-0d1e-4694-85de-f70fb0bcfee7] Running
	I0114 02:32:29.256438    9007 system_pods.go:61] "kube-scheduler-multinode-022829" [dec76631-6f7c-433f-87e4-2d0c847b6f29] Running
	I0114 02:32:29.256456    9007 system_pods.go:61] "storage-provisioner" [29960f5b-1391-43dd-9ebb-93c76a894fa2] Running
	I0114 02:32:29.256465    9007 system_pods.go:74] duration metric: took 186.648054ms to wait for pod list to return data ...
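Editor's note: the system_pods wait boils down to listing everything in kube-system and confirming each pod's phase. A compact sketch of that check; the function name is ours.

    package syspods

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allSystemPodsRunning lists kube-system and reports whether every pod is
    // Running (or already Succeeded), like the system_pods.go check above.
    func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
                return false, nil
            }
        }
        return len(pods.Items) > 0, nil
    }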
	I0114 02:32:29.256473    9007 default_sa.go:34] waiting for default service account to be created ...
	I0114 02:32:29.447712    9007 request.go:614] Waited for 191.192358ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/default/serviceaccounts
	I0114 02:32:29.447791    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/default/serviceaccounts
	I0114 02:32:29.447801    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.447845    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.447862    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.451925    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:29.451936    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.451942    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.451952    9007 round_trippers.go:580]     Audit-Id: 35d9a91e-de35-4fd6-8d50-bc367017e522
	I0114 02:32:29.451958    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.451962    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.451967    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.451972    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.451978    9007 round_trippers.go:580]     Content-Length: 261
	I0114 02:32:29.451990    9007 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5c806b58-da2e-4969-a790-2c7b416acba0","resourceVersion":"316","creationTimestamp":"2023-01-14T10:29:10Z"}}]}
	I0114 02:32:29.452111    9007 default_sa.go:45] found service account: "default"
	I0114 02:32:29.452118    9007 default_sa.go:55] duration metric: took 195.640566ms for default service account to be created ...
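Editor's note: the default_sa step only confirms that the "default" ServiceAccount exists in the default namespace, since pod creation in a namespace generally has to wait for it. A sketch of the same probe.

    package defaultsa

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultServiceAccountExists checks for the "default" ServiceAccount in the
    // default namespace, the condition the default_sa.go wait satisfies above.
    func defaultServiceAccountExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    }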
	I0114 02:32:29.452123    9007 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 02:32:29.647824    9007 request.go:614] Waited for 195.660293ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:29.647904    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/namespaces/kube-system/pods
	I0114 02:32:29.647916    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.647929    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.647941    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.653264    9007 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0114 02:32:29.653288    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.653299    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.653307    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.653315    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.653322    9007 round_trippers.go:580]     Audit-Id: f0f7011f-dd67-4613-8936-a2cabf271ca7
	I0114 02:32:29.653330    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.653340    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.654372    9007 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"coredns-565d847f94-xg88j","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"8ba9cbef-253e-46ad-aa78-55875dc5939b","resourceVersion":"756","creationTimestamp":"2023-01-14T10:29:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"36b72df7-54d4-437c-ae0f-13924e39d8ca","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T10:29:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36b72df7-54d4-437c-ae0f-13924e39d8ca\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84540 chars]
	I0114 02:32:29.656330    9007 system_pods.go:86] 12 kube-system pods found
	I0114 02:32:29.656341    9007 system_pods.go:89] "coredns-565d847f94-xg88j" [8ba9cbef-253e-46ad-aa78-55875dc5939b] Running
	I0114 02:32:29.656350    9007 system_pods.go:89] "etcd-multinode-022829" [f307fa22-ea8d-4a38-b092-8fd0c7f8a7d8] Running
	I0114 02:32:29.656354    9007 system_pods.go:89] "kindnet-2ffw5" [6e2e34df-4259-4f9d-a1d8-b7c33a252211] Running
	I0114 02:32:29.656358    9007 system_pods.go:89] "kindnet-crlwb" [129cddf5-10fc-4467-ab9d-d9a47d195213] Running
	I0114 02:32:29.656362    9007 system_pods.go:89] "kindnet-pqh2t" [cb280495-e617-461c-a259-e28b47f301d6] Running
	I0114 02:32:29.656365    9007 system_pods.go:89] "kube-apiserver-multinode-022829" [b153813e-4767-4643-9cc4-ab5c1f8a2441] Running
	I0114 02:32:29.656372    9007 system_pods.go:89] "kube-controller-manager-multinode-022829" [3ecd3fea-11b6-4dd0-9ac1-200f293b0e22] Running
	I0114 02:32:29.656376    9007 system_pods.go:89] "kube-proxy-6bgqj" [330a14fa-1ce0-4857-81a1-2988087382d4] Running
	I0114 02:32:29.656380    9007 system_pods.go:89] "kube-proxy-7p92j" [abe462b8-5607-4e29-b040-12678d7ec756] Running
	I0114 02:32:29.656384    9007 system_pods.go:89] "kube-proxy-pplrc" [f6acf6b8-0d1e-4694-85de-f70fb0bcfee7] Running
	I0114 02:32:29.656387    9007 system_pods.go:89] "kube-scheduler-multinode-022829" [dec76631-6f7c-433f-87e4-2d0c847b6f29] Running
	I0114 02:32:29.656391    9007 system_pods.go:89] "storage-provisioner" [29960f5b-1391-43dd-9ebb-93c76a894fa2] Running
	I0114 02:32:29.656396    9007 system_pods.go:126] duration metric: took 204.267755ms to wait for k8s-apps to be running ...
	I0114 02:32:29.656400    9007 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 02:32:29.656462    9007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:32:29.666195    9007 system_svc.go:56] duration metric: took 9.791057ms WaitForService to wait for kubelet.
	I0114 02:32:29.666208    9007 kubeadm.go:573] duration metric: took 4.201745398s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 02:32:29.666238    9007 node_conditions.go:102] verifying NodePressure condition ...
	I0114 02:32:29.849499    9007 request.go:614] Waited for 183.202107ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51427/api/v1/nodes
	I0114 02:32:29.849638    9007 round_trippers.go:463] GET https://127.0.0.1:51427/api/v1/nodes
	I0114 02:32:29.849649    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:29.849664    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:29.849674    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:29.854648    9007 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0114 02:32:29.854661    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:29.854668    9007 round_trippers.go:580]     Audit-Id: 9fdebe90-6a3a-43d4-8799-1f0266910e16
	I0114 02:32:29.854673    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:29.854677    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:29.854682    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:29.854691    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:29.854697    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:29 GMT
	I0114 02:32:29.854801    9007 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"multinode-022829","uid":"2272de5f-4e58-4e9d-af65-c768992dfe4e","resourceVersion":"697","creationTimestamp":"2023-01-14T10:28:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-022829","kubernetes.io/os":"linux","minikube.k8s.io/commit":"59da54e5a04973bd17dc62cf57cb4173bab7bf81","minikube.k8s.io/name":"multinode-022829","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_01_14T02_28_58_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 16143 chars]
	I0114 02:32:29.855226    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:29.855234    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:29.855244    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:29.855248    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:29.855251    9007 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:32:29.855255    9007 node_conditions.go:123] node cpu capacity is 6
	I0114 02:32:29.855258    9007 node_conditions.go:105] duration metric: took 189.012802ms to run NodePressure ...
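Editor's note: the NodePressure step lists all nodes and records the ephemeral-storage and CPU capacity figures printed above (three nodes, hence three pairs of lines). A sketch of reading the same fields with client-go.

    package nodecap

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints the two capacity values that
    // node_conditions.go logs above for each node in the cluster.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }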
	I0114 02:32:29.855265    9007 start.go:217] waiting for startup goroutines ...
	I0114 02:32:29.855836    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:32:29.855919    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:32:29.897495    9007 out.go:177] * Starting worker node multinode-022829-m02 in cluster multinode-022829
	I0114 02:32:29.919592    9007 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:32:29.940701    9007 out.go:177] * Pulling base image ...
	I0114 02:32:29.961500    9007 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:32:29.961518    9007 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:32:29.961537    9007 cache.go:57] Caching tarball of preloaded images
	I0114 02:32:29.961739    9007 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 02:32:29.961761    9007 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 02:32:29.962645    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:32:30.018552    9007 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 02:32:30.018565    9007 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 02:32:30.018588    9007 cache.go:193] Successfully downloaded all kic artifacts
	I0114 02:32:30.018615    9007 start.go:364] acquiring machines lock for multinode-022829-m02: {Name:mk6c619d9d56cbda4f1a28e82601a01ccd5e065f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:32:30.018696    9007 start.go:368] acquired machines lock for "multinode-022829-m02" in 71.367µs
	I0114 02:32:30.018718    9007 start.go:96] Skipping create...Using existing machine configuration
	I0114 02:32:30.018724    9007 fix.go:55] fixHost starting: m02
	I0114 02:32:30.018992    9007 cli_runner.go:164] Run: docker container inspect multinode-022829-m02 --format={{.State.Status}}
	I0114 02:32:30.076194    9007 fix.go:103] recreateIfNeeded on multinode-022829-m02: state=Stopped err=<nil>
	W0114 02:32:30.076215    9007 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 02:32:30.098026    9007 out.go:177] * Restarting existing docker container for "multinode-022829-m02" ...
	I0114 02:32:30.139610    9007 cli_runner.go:164] Run: docker start multinode-022829-m02
	I0114 02:32:30.467610    9007 cli_runner.go:164] Run: docker container inspect multinode-022829-m02 --format={{.State.Status}}
	I0114 02:32:30.528233    9007 kic.go:426] container "multinode-022829-m02" state is running.
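Editor's note: the cli_runner lines above shell out to the Docker CLI to inspect the stopped worker container and start it again. A rough os/exec equivalent; the container name is copied from the log and this is an illustration, not minikube's cli_runner.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        name := "multinode-022829-m02"
        // Same probe as the log: docker container inspect --format={{.State.Status}}
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            panic(err)
        }
        if strings.TrimSpace(string(out)) != "running" {
            // Restart the existing container instead of recreating it, matching
            // the "Restarting existing docker container" step above.
            if err := exec.Command("docker", "start", name).Run(); err != nil {
                panic(err)
            }
        }
        fmt.Println(name, "is running")
    }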
	I0114 02:32:30.528831    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829-m02
	I0114 02:32:30.590604    9007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/config.json ...
	I0114 02:32:30.591117    9007 machine.go:88] provisioning docker machine ...
	I0114 02:32:30.591134    9007 ubuntu.go:169] provisioning hostname "multinode-022829-m02"
	I0114 02:32:30.591213    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:30.667576    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:30.667849    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:30.667860    9007 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-022829-m02 && echo "multinode-022829-m02" | sudo tee /etc/hostname
	I0114 02:32:30.836800    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-022829-m02
	
	I0114 02:32:30.836900    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:30.897502    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:30.897676    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:30.897689    9007 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-022829-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-022829-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-022829-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 02:32:31.013909    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: 
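Editor's note: the "native SSH client" steps dial the host-forwarded port on 127.0.0.1 and run the hostname and /etc/hosts commands shown above. A minimal golang.org/x/crypto/ssh sketch of running the hostname command the same way; the user, port, and key path are taken from the log, and host-key checking is skipped purely for illustration.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user and forwarded port are copied from the log above.
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only: skips host-key verification
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:51454", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-022829-m02 && echo "multinode-022829-m02" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }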
	I0114 02:32:31.013941    9007 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 02:32:31.013966    9007 ubuntu.go:177] setting up certificates
	I0114 02:32:31.013975    9007 provision.go:83] configureAuth start
	I0114 02:32:31.014100    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829-m02
	I0114 02:32:31.076266    9007 provision.go:138] copyHostCerts
	I0114 02:32:31.076317    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:32:31.076392    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 02:32:31.076398    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:32:31.076546    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 02:32:31.076723    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:32:31.076772    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 02:32:31.076777    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:32:31.076852    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 02:32:31.076977    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:32:31.077029    9007 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 02:32:31.077035    9007 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:32:31.077105    9007 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 02:32:31.077233    9007 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.multinode-022829-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-022829-m02]
	I0114 02:32:31.155049    9007 provision.go:172] copyRemoteCerts
	I0114 02:32:31.155122    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 02:32:31.155190    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.221275    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:31.310642    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0114 02:32:31.310729    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 02:32:31.328177    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0114 02:32:31.328267    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0114 02:32:31.346801    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0114 02:32:31.346906    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 02:32:31.364598    9007 provision.go:86] duration metric: configureAuth took 350.595855ms
	I0114 02:32:31.364611    9007 ubuntu.go:193] setting minikube options for container-runtime
	I0114 02:32:31.364801    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:32:31.364882    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.426464    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:31.426642    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:31.426652    9007 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 02:32:31.543817    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 02:32:31.543830    9007 ubuntu.go:71] root file system type: overlay
	I0114 02:32:31.543964    9007 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 02:32:31.544047    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.602200    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:31.602359    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:31.602408    9007 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 02:32:31.727457    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 02:32:31.727576    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.784996    9007 main.go:134] libmachine: Using SSH client type: native
	I0114 02:32:31.785151    9007 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 51454 <nil> <nil>}
	I0114 02:32:31.785164    9007 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 02:32:31.905443    9007 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:32:31.905466    9007 machine.go:91] provisioned docker machine in 1.314337908s
	I0114 02:32:31.905474    9007 start.go:300] post-start starting for "multinode-022829-m02" (driver="docker")
	I0114 02:32:31.905480    9007 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 02:32:31.905567    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 02:32:31.905636    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:31.962523    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:32.050051    9007 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 02:32:32.053688    9007 command_runner.go:130] > NAME="Ubuntu"
	I0114 02:32:32.053700    9007 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0114 02:32:32.053704    9007 command_runner.go:130] > ID=ubuntu
	I0114 02:32:32.053710    9007 command_runner.go:130] > ID_LIKE=debian
	I0114 02:32:32.053717    9007 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0114 02:32:32.053722    9007 command_runner.go:130] > VERSION_ID="20.04"
	I0114 02:32:32.053729    9007 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0114 02:32:32.053735    9007 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0114 02:32:32.053740    9007 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0114 02:32:32.053750    9007 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0114 02:32:32.053755    9007 command_runner.go:130] > VERSION_CODENAME=focal
	I0114 02:32:32.053759    9007 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0114 02:32:32.053798    9007 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 02:32:32.053812    9007 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 02:32:32.053819    9007 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 02:32:32.053824    9007 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 02:32:32.053829    9007 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 02:32:32.053933    9007 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 02:32:32.054115    9007 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 02:32:32.054123    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /etc/ssl/certs/27282.pem
	I0114 02:32:32.054334    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 02:32:32.061876    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:32:32.078838    9007 start.go:303] post-start completed in 173.354372ms
	I0114 02:32:32.078925    9007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:32:32.078994    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:32.137115    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:32.219758    9007 command_runner.go:130] > 7%!
	(MISSING)I0114 02:32:32.219848    9007 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 02:32:32.224323    9007 command_runner.go:130] > 91G
	I0114 02:32:32.224574    9007 fix.go:57] fixHost completed within 2.205843293s
	I0114 02:32:32.224585    9007 start.go:83] releasing machines lock for "multinode-022829-m02", held for 2.205876001s
	I0114 02:32:32.224678    9007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829-m02
	I0114 02:32:32.306498    9007 out.go:177] * Found network options:
	I0114 02:32:32.327676    9007 out.go:177]   - NO_PROXY=192.168.58.2
	W0114 02:32:32.348450    9007 proxy.go:119] fail to check proxy env: Error ip not in block
	W0114 02:32:32.348487    9007 proxy.go:119] fail to check proxy env: Error ip not in block
	I0114 02:32:32.348610    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0114 02:32:32.348612    9007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 02:32:32.348671    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:32.348691    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:32:32.409876    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:32.411196    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51454 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:32:32.548417    9007 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0114 02:32:32.548489    9007 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0114 02:32:32.561878    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:32.638916    9007 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0114 02:32:32.724893    9007 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 02:32:32.735612    9007 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0114 02:32:32.735763    9007 command_runner.go:130] > [Unit]
	I0114 02:32:32.735775    9007 command_runner.go:130] > Description=Docker Application Container Engine
	I0114 02:32:32.735780    9007 command_runner.go:130] > Documentation=https://docs.docker.com
	I0114 02:32:32.735784    9007 command_runner.go:130] > BindsTo=containerd.service
	I0114 02:32:32.735791    9007 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0114 02:32:32.735797    9007 command_runner.go:130] > Wants=network-online.target
	I0114 02:32:32.735809    9007 command_runner.go:130] > Requires=docker.socket
	I0114 02:32:32.735816    9007 command_runner.go:130] > StartLimitBurst=3
	I0114 02:32:32.735821    9007 command_runner.go:130] > StartLimitIntervalSec=60
	I0114 02:32:32.735828    9007 command_runner.go:130] > [Service]
	I0114 02:32:32.735834    9007 command_runner.go:130] > Type=notify
	I0114 02:32:32.735840    9007 command_runner.go:130] > Restart=on-failure
	I0114 02:32:32.735854    9007 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0114 02:32:32.735864    9007 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0114 02:32:32.735883    9007 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0114 02:32:32.735894    9007 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0114 02:32:32.735903    9007 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0114 02:32:32.735913    9007 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0114 02:32:32.735921    9007 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0114 02:32:32.735930    9007 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0114 02:32:32.735943    9007 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0114 02:32:32.735950    9007 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0114 02:32:32.735953    9007 command_runner.go:130] > ExecStart=
	I0114 02:32:32.735999    9007 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0114 02:32:32.736011    9007 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0114 02:32:32.736034    9007 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0114 02:32:32.736047    9007 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0114 02:32:32.736053    9007 command_runner.go:130] > LimitNOFILE=infinity
	I0114 02:32:32.736064    9007 command_runner.go:130] > LimitNPROC=infinity
	I0114 02:32:32.736075    9007 command_runner.go:130] > LimitCORE=infinity
	I0114 02:32:32.736086    9007 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0114 02:32:32.736095    9007 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0114 02:32:32.736100    9007 command_runner.go:130] > TasksMax=infinity
	I0114 02:32:32.736104    9007 command_runner.go:130] > TimeoutStartSec=0
	I0114 02:32:32.736110    9007 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0114 02:32:32.736115    9007 command_runner.go:130] > Delegate=yes
	I0114 02:32:32.736127    9007 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0114 02:32:32.736133    9007 command_runner.go:130] > KillMode=process
	I0114 02:32:32.736136    9007 command_runner.go:130] > [Install]
	I0114 02:32:32.736140    9007 command_runner.go:130] > WantedBy=multi-user.target
	I0114 02:32:32.736850    9007 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 02:32:32.736912    9007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 02:32:32.746888    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 02:32:32.759037    9007 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0114 02:32:32.759049    9007 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0114 02:32:32.759909    9007 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 02:32:32.832760    9007 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 02:32:32.909482    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:32.982395    9007 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 02:32:33.202325    9007 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 02:32:33.270478    9007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:32:33.352263    9007 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 02:32:33.362395    9007 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 02:32:33.362486    9007 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 02:32:33.366336    9007 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0114 02:32:33.366346    9007 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0114 02:32:33.366351    9007 command_runner.go:130] > Device: 100036h/1048630d	Inode: 128         Links: 1
	I0114 02:32:33.366357    9007 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0114 02:32:33.366366    9007 command_runner.go:130] > Access: 2023-01-14 10:32:32.654116379 +0000
	I0114 02:32:33.366371    9007 command_runner.go:130] > Modify: 2023-01-14 10:32:32.650116379 +0000
	I0114 02:32:33.366376    9007 command_runner.go:130] > Change: 2023-01-14 10:32:32.651116379 +0000
	I0114 02:32:33.366380    9007 command_runner.go:130] >  Birth: -
	I0114 02:32:33.366399    9007 start.go:472] Will wait 60s for crictl version
	I0114 02:32:33.366443    9007 ssh_runner.go:195] Run: which crictl
	I0114 02:32:33.370076    9007 command_runner.go:130] > /usr/bin/crictl
	I0114 02:32:33.370235    9007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 02:32:33.397073    9007 command_runner.go:130] > Version:  0.1.0
	I0114 02:32:33.397086    9007 command_runner.go:130] > RuntimeName:  docker
	I0114 02:32:33.397090    9007 command_runner.go:130] > RuntimeVersion:  20.10.21
	I0114 02:32:33.397095    9007 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0114 02:32:33.399228    9007 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 02:32:33.399322    9007 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:32:33.427357    9007 command_runner.go:130] > 20.10.21
	I0114 02:32:33.429509    9007 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:32:33.455706    9007 command_runner.go:130] > 20.10.21
	I0114 02:32:33.481551    9007 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 02:32:33.523192    9007 out.go:177]   - env NO_PROXY=192.168.58.2
	I0114 02:32:33.544610    9007 cli_runner.go:164] Run: docker exec -t multinode-022829-m02 dig +short host.docker.internal
	I0114 02:32:33.654806    9007 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 02:32:33.654910    9007 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 02:32:33.659065    9007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:32:33.668923    9007 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829 for IP: 192.168.58.3
	I0114 02:32:33.669065    9007 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 02:32:33.669137    9007 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 02:32:33.669145    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0114 02:32:33.669173    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0114 02:32:33.669193    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0114 02:32:33.669215    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0114 02:32:33.669314    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 02:32:33.669374    9007 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 02:32:33.669388    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 02:32:33.669424    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 02:32:33.669465    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 02:32:33.669498    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 02:32:33.669580    9007 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:32:33.669613    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.669636    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.669658    9007 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem -> /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.669978    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 02:32:33.687135    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 02:32:33.704197    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 02:32:33.722104    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 02:32:33.739409    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 02:32:33.756732    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 02:32:33.773943    9007 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 02:32:33.791455    9007 ssh_runner.go:195] Run: openssl version
	I0114 02:32:33.796780    9007 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0114 02:32:33.797153    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 02:32:33.805505    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.809262    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.809427    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.809480    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 02:32:33.814578    9007 command_runner.go:130] > 3ec20f2e
	I0114 02:32:33.814995    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 02:32:33.822667    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 02:32:33.830729    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.834645    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.834753    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.834810    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:32:33.839794    9007 command_runner.go:130] > b5213941
	I0114 02:32:33.840130    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 02:32:33.847595    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 02:32:33.855728    9007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.859822    9007 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.859847    9007 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.859891    9007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 02:32:33.865041    9007 command_runner.go:130] > 51391683
	I0114 02:32:33.865444    9007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 02:32:33.872970    9007 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 02:32:33.940129    9007 command_runner.go:130] > systemd
	I0114 02:32:33.942178    9007 cni.go:95] Creating CNI manager for ""
	I0114 02:32:33.942192    9007 cni.go:156] 3 nodes found, recommending kindnet
	I0114 02:32:33.942205    9007 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 02:32:33.942215    9007 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-022829 NodeName:multinode-022829-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 02:32:33.942317    9007 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-022829-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 02:32:33.942367    9007 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-022829-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 02:32:33.942439    9007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 02:32:33.949950    9007 command_runner.go:130] > kubeadm
	I0114 02:32:33.949959    9007 command_runner.go:130] > kubectl
	I0114 02:32:33.949963    9007 command_runner.go:130] > kubelet
	I0114 02:32:33.950576    9007 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 02:32:33.950641    9007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0114 02:32:33.957907    9007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I0114 02:32:33.970617    9007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 02:32:33.983243    9007 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0114 02:32:33.986967    9007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:32:33.996741    9007 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:32:33.996942    9007 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:32:33.996927    9007 start.go:286] JoinCluster: &{Name:multinode-022829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-022829 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:32:33.996994    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0114 02:32:33.997057    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:34.057064    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:34.189867    9007 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 
	I0114 02:32:34.189908    9007 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:32:34.189929    9007 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:32:34.190179    9007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-022829-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0114 02:32:34.190237    9007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:32:34.248470    9007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51423 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:32:34.374802    9007 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0114 02:32:34.398490    9007 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-crlwb, kube-system/kube-proxy-7p92j
	I0114 02:32:37.411146    9007 command_runner.go:130] > node/multinode-022829-m02 cordoned
	I0114 02:32:37.411160    9007 command_runner.go:130] > pod "busybox-65db55d5d6-tqh8p" has DeletionTimestamp older than 1 seconds, skipping
	I0114 02:32:37.411179    9007 command_runner.go:130] > node/multinode-022829-m02 drained
	I0114 02:32:37.411199    9007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-022829-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.22099785s)
	I0114 02:32:37.411208    9007 node.go:109] successfully drained node "m02"
	I0114 02:32:37.411547    9007 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:32:37.411775    9007 kapi.go:59] client config for multinode-022829: &rest.Config{Host:"https://127.0.0.1:51427", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/multinode-022829/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:32:37.412047    9007 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0114 02:32:37.412079    9007 round_trippers.go:463] DELETE https://127.0.0.1:51427/api/v1/nodes/multinode-022829-m02
	I0114 02:32:37.412083    9007 round_trippers.go:469] Request Headers:
	I0114 02:32:37.412090    9007 round_trippers.go:473]     Accept: application/json, */*
	I0114 02:32:37.412095    9007 round_trippers.go:473]     Content-Type: application/json
	I0114 02:32:37.412100    9007 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0114 02:32:37.415764    9007 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0114 02:32:37.415776    9007 round_trippers.go:577] Response Headers:
	I0114 02:32:37.415786    9007 round_trippers.go:580]     Cache-Control: no-cache, private
	I0114 02:32:37.415791    9007 round_trippers.go:580]     Content-Type: application/json
	I0114 02:32:37.415796    9007 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f6ffccf5-192f-4dbf-843e-5e67f585e957
	I0114 02:32:37.415801    9007 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3945039f-fc05-4085-84ee-1c88d4da07a9
	I0114 02:32:37.415805    9007 round_trippers.go:580]     Content-Length: 171
	I0114 02:32:37.415811    9007 round_trippers.go:580]     Date: Sat, 14 Jan 2023 10:32:37 GMT
	I0114 02:32:37.415815    9007 round_trippers.go:580]     Audit-Id: 739df83f-6189-4f42-9d26-4657bc9bee4f
	I0114 02:32:37.415829    9007 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-022829-m02","kind":"nodes","uid":"3911d4d5-57fa-4f76-9a4f-ea2b104e8003"}}
	I0114 02:32:37.415857    9007 node.go:125] successfully deleted node "m02"
	I0114 02:32:37.415865    9007 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:32:37.415876    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:32:37.415888    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:32:37.487561    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:32:37.596054    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:32:37.596072    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 02:32:37.613636    9007 command_runner.go:130] ! W0114 10:32:37.486754    1070 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:37.613655    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:32:37.613667    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:32:37.613675    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:32:37.613681    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:32:37.613689    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:32:37.613702    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:32:37.613708    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0114 02:32:37.613750    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:32:37.486754    1070 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:37.613763    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:32:37.613771    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:32:37.652178    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:32:37.652195    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:37.655560    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:37.655591    9007 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:32:37.486754    1070 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:48.704292    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:32:48.704370    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:32:48.744107    9007 command_runner.go:130] ! W0114 10:32:48.743294    1598 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:32:48.744887    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:32:48.768330    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:32:48.772640    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:32:48.834974    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:32:48.834989    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:32:48.859544    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:32:48.859559    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:48.862605    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:32:48.862618    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:32:48.862629    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 02:32:48.862658    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:32:48.743294    1598 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:48.862665    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:32:48.862676    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:32:48.901575    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:32:48.901592    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:48.901608    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:32:48.901622    9007 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:32:48.743294    1598 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.509416    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:33:10.509476    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:33:10.550128    9007 command_runner.go:130] ! W0114 10:33:10.549307    1821 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:33:10.550143    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:33:10.573506    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:33:10.578261    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:33:10.641449    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:33:10.641467    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:33:10.665877    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:33:10.665890    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.669002    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:33:10.669016    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:33:10.669023    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 02:33:10.669049    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:10.549307    1821 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.669064    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:33:10.669072    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:33:10.710078    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:33:10.710095    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.710114    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:10.710125    9007 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:10.549307    1821 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:36.913164    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:33:36.913286    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:33:36.951844    9007 command_runner.go:130] ! W0114 10:33:36.951096    2072 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:33:36.951859    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:33:36.974820    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:33:36.979461    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:33:37.042647    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:33:37.042670    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:33:37.066765    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:33:37.066780    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:37.069747    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:33:37.069763    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:33:37.069772    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E0114 02:33:37.069817    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:36.951096    2072 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:37.069829    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:33:37.069859    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:33:37.109456    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:33:37.109473    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:37.109493    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:33:37.109504    9007 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:33:36.951096    2072 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:08.757659    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:34:08.757763    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:34:08.797250    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:34:08.898628    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:34:08.898642    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 02:34:08.917294    9007 command_runner.go:130] ! W0114 10:34:08.796491    2383 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:34:08.917311    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:34:08.917322    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:34:08.917328    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:34:08.917333    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:34:08.917339    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:34:08.917348    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:34:08.917355    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0114 02:34:08.917382    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:34:08.796491    2383 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:08.917392    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:34:08.917400    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:34:08.959277    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:34:08.959294    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:08.959313    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:08.959324    9007 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:34:08.796491    2383 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:55.771184    9007 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0114 02:34:55.771249    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02"
	I0114 02:34:55.810924    9007 command_runner.go:130] > [preflight] Running pre-flight checks
	I0114 02:34:55.910562    9007 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0114 02:34:55.910604    9007 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0114 02:34:55.929717    9007 command_runner.go:130] ! W0114 10:34:55.810162    2790 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:34:55.929733    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0114 02:34:55.929744    9007 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:34:55.929750    9007 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:34:55.929755    9007 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0114 02:34:55.929763    9007 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0114 02:34:55.929773    9007 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0114 02:34:55.929780    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0114 02:34:55.929811    9007 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:34:55.810162    2790 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:55.929819    9007 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I0114 02:34:55.929826    9007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I0114 02:34:55.968698    9007 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0114 02:34:55.968718    9007 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0114 02:34:55.968736    9007 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
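	The repeated "kubeadm reset --force" failures above come from two CRI endpoints (containerd and cri-dockerd) being present on the node, so kubeadm refuses to pick one. A minimal manual sketch of how the reset could be disambiguated, assuming SSH access to the worker and reusing the cri-dockerd socket that the join command already targets (paths and version taken from the log, not re-verified here):

	    # run on multinode-022829-m02; --cri-socket selects one endpoint explicitly
	    sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
	      kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock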
	I0114 02:34:55.968756    9007 start.go:288] JoinCluster complete in 2m21.971491515s
	I0114 02:34:55.990848    9007 out.go:177] 
	W0114 02:34:56.011875    9007 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d019nt.ldbtlbbu4puhrscz --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-022829-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0114 10:34:55.810162    2790 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-022829-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 02:34:56.011909    9007 out.go:239] * 
	W0114 02:34:56.013142    9007 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 02:34:56.076892    9007 out.go:177] 
	
	* 
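	The join itself keeps failing because a Node named "multinode-022829-m02" is already registered and Ready, so kubeadm's kubelet-start phase aborts on every retry. A hedged recovery sketch, assuming kubectl access to this profile's control plane and that removing the stale Node object is acceptable for the test (node name and join arguments copied from the log above):

	    # remove the stale Node object, then re-run the join minikube was retrying
	    kubectl delete node multinode-022829-m02
	    sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
	      kubeadm join control-plane.minikube.internal:8443 \
	        --token d019nt.ldbtlbbu4puhrscz \
	        --discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 \
	        --ignore-preflight-errors=all \
	        --cri-socket unix:///var/run/cri-dockerd.sock \
	        --node-name=multinode-022829-m02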
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-14 10:31:58 UTC, end at Sat 2023-01-14 10:34:57 UTC. --
	Jan 14 10:32:01 multinode-022829 dockerd[129]: time="2023-01-14T10:32:01.272907910Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 14 10:32:01 multinode-022829 dockerd[129]: time="2023-01-14T10:32:01.273140732Z" level=info msg="Daemon shutdown complete"
	Jan 14 10:32:01 multinode-022829 systemd[1]: docker.service: Succeeded.
	Jan 14 10:32:01 multinode-022829 systemd[1]: Stopped Docker Application Container Engine.
	Jan 14 10:32:01 multinode-022829 systemd[1]: docker.service: Consumed 1.649s CPU time.
	Jan 14 10:32:01 multinode-022829 systemd[1]: Starting Docker Application Container Engine...
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.328615020Z" level=info msg="Starting up"
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.330420000Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.330459363Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.330474513Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.330482032Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.331562477Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.331601451Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.331613877Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.331620833Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.335339926Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.340433159Z" level=info msg="Loading containers: start."
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.441694157Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.478025328Z" level=info msg="Loading containers: done."
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.487595395Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.487664664Z" level=info msg="Daemon has completed initialization"
	Jan 14 10:32:01 multinode-022829 systemd[1]: Started Docker Application Container Engine.
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.509155224Z" level=info msg="API listen on [::]:2376"
	Jan 14 10:32:01 multinode-022829 dockerd[685]: time="2023-01-14T10:32:01.512403869Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 14 10:32:42 multinode-022829 dockerd[685]: time="2023-01-14T10:32:42.692709896Z" level=info msg="ignoring event" container=75564d6b4d6379eafdc84e4331f5adef7ff076f121b54d65c6efb7e51014e4b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	fd0ffbefdb59a       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   b16d1d685bab3
	2ce52a8f84be7       d6e3e26021b60                                                                                         2 minutes ago       Running             kindnet-cni               1                   f481555e016b2
	58d0138499d42       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   00eea7b06a7ea
	e7df9617d6ed9       5185b96f0becf                                                                                         2 minutes ago       Running             coredns                   1                   78c89eb9eb078
	f24b54fc23524       beaaf00edd38a                                                                                         2 minutes ago       Running             kube-proxy                1                   768802738cb8a
	75564d6b4d637       6e38f40d628db                                                                                         2 minutes ago       Exited              storage-provisioner       1                   b16d1d685bab3
	91c371b27d024       6d23ec0e8b87e                                                                                         2 minutes ago       Running             kube-scheduler            1                   2c5ae434f5558
	24f8955c8cfa3       0346dbd74bcb9                                                                                         2 minutes ago       Running             kube-apiserver            1                   1a3a54040f490
	0b995032f1854       a8a176a5d5d69                                                                                         2 minutes ago       Running             etcd                      1                   1acbcb9e524ab
	067c6e644f077       6039992312758                                                                                         2 minutes ago       Running             kube-controller-manager   1                   43fbde5b5ee80
	5e87b8547f497       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago       Exited              busybox                   0                   9c1890d684253
	22dfc551af5e4       5185b96f0becf                                                                                         5 minutes ago       Exited              coredns                   0                   2b17c5d2929ae
	85252b0696494       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              5 minutes ago       Exited              kindnet-cni               0                   ed5ada705ceea
	ed7a47472cbc5       beaaf00edd38a                                                                                         5 minutes ago       Exited              kube-proxy                0                   a7ee261cbfc68
	d3ae0d142c8f2       0346dbd74bcb9                                                                                         6 minutes ago       Exited              kube-apiserver            0                   32c139aa36179
	9048785f4e907       6039992312758                                                                                         6 minutes ago       Exited              kube-controller-manager   0                   2da5274a05416
	516991d5f2e5c       a8a176a5d5d69                                                                                         6 minutes ago       Exited              etcd                      0                   037848e173d9b
	22eb2357fc11b       6d23ec0e8b87e                                                                                         6 minutes ago       Exited              kube-scheduler            0                   88473b6a518e1
	
	* 
	* ==> coredns [22dfc551af5e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [e7df9617d6ed] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-022829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-022829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=multinode-022829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T02_28_58_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:28:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-022829
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:34:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:32:11 +0000   Sat, 14 Jan 2023 10:28:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:32:11 +0000   Sat, 14 Jan 2023 10:28:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:32:11 +0000   Sat, 14 Jan 2023 10:28:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:32:11 +0000   Sat, 14 Jan 2023 10:29:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-022829
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                dc065f8e2d1f42529ccfe18f8b887c8c
	  Boot ID:                    1fa391b2-9843-4b7f-ae34-c4015ac7f4a2
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-586cr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 coredns-565d847f94-xg88j                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     5m47s
	  kube-system                 etcd-multinode-022829                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         6m1s
	  kube-system                 kindnet-pqh2t                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m48s
	  kube-system                 kube-apiserver-multinode-022829             250m (4%)     0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-controller-manager-multinode-022829    200m (3%)     0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-pplrc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-scheduler-multinode-022829             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m46s                  kube-proxy       
	  Normal  Starting                 2m44s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m1s                   kubelet          Node multinode-022829 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m1s                   kubelet          Node multinode-022829 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s                   kubelet          Node multinode-022829 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m1s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           5m48s                  node-controller  Node multinode-022829 event: Registered Node multinode-022829 in Controller
	  Normal  NodeReady                5m40s                  kubelet          Node multinode-022829 status is now: NodeReady
	  Normal  Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m51s (x8 over 2m52s)  kubelet          Node multinode-022829 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x8 over 2m52s)  kubelet          Node multinode-022829 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x7 over 2m52s)  kubelet          Node multinode-022829 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m34s                  node-controller  Node multinode-022829 event: Registered Node multinode-022829 in Controller
	
	
	Name:               multinode-022829-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-022829-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:32:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-022829-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:34:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:32:37 +0000   Sat, 14 Jan 2023 10:32:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:32:37 +0000   Sat, 14 Jan 2023 10:32:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:32:37 +0000   Sat, 14 Jan 2023 10:32:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:32:37 +0000   Sat, 14 Jan 2023 10:32:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-022829-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                dc065f8e2d1f42529ccfe18f8b887c8c
	  Boot ID:                    1fa391b2-9843-4b7f-ae34-c4015ac7f4a2
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-crlwb       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m24s
	  kube-system                 kube-proxy-7p92j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 2m15s                  kube-proxy  
	  Normal  Starting                 5m18s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m24s (x2 over 5m24s)  kubelet     Node multinode-022829-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x2 over 5m24s)  kubelet     Node multinode-022829-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x2 over 5m24s)  kubelet     Node multinode-022829-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m24s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m3s                   kubelet     Node multinode-022829-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 2m27s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s (x7 over 2m27s)  kubelet     Node multinode-022829-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x7 over 2m27s)  kubelet     Node multinode-022829-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m27s)  kubelet     Node multinode-022829-m02 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-022829-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-022829-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:31:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-022829-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:31:17 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 14 Jan 2023 10:31:17 +0000   Sat, 14 Jan 2023 10:33:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 14 Jan 2023 10:31:17 +0000   Sat, 14 Jan 2023 10:33:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 14 Jan 2023 10:31:17 +0000   Sat, 14 Jan 2023 10:33:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 14 Jan 2023 10:31:17 +0000   Sat, 14 Jan 2023 10:33:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.4
	  Hostname:    multinode-022829-m03
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                dc065f8e2d1f42529ccfe18f8b887c8c
	  Boot ID:                    1fa391b2-9843-4b7f-ae34-c4015ac7f4a2
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-qnfct    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kindnet-2ffw5               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m40s
	  kube-system                 kube-proxy-6bgqj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m33s                  kube-proxy       
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m40s (x2 over 4m40s)  kubelet          Node multinode-022829-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s (x2 over 4m40s)  kubelet          Node multinode-022829-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s (x2 over 4m40s)  kubelet          Node multinode-022829-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m29s                  kubelet          Node multinode-022829-m03 status is now: NodeReady
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x2 over 3m51s)  kubelet          Node multinode-022829-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x2 over 3m51s)  kubelet          Node multinode-022829-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x2 over 3m51s)  kubelet          Node multinode-022829-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m41s                  kubelet          Node multinode-022829-m03 status is now: NodeReady
	  Normal  RegisteredNode           2m34s                  node-controller  Node multinode-022829-m03 event: Registered Node multinode-022829-m03 in Controller
	  Normal  NodeNotReady             114s                   node-controller  Node multinode-022829-m03 status is now: NodeNotReady
	
	* 
	* ==> dmesg <==
	* [  +0.000061] FS-Cache: O-key=[8] '69586c0400000000'
	[  +0.000051] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000052] FS-Cache: N-cookie d=000000004c8f7214{9p.inode} n=0000000037884123
	[  +0.000056] FS-Cache: N-key=[8] '69586c0400000000'
	[  +0.001452] FS-Cache: Duplicate cookie detected
	[  +0.000048] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000050] FS-Cache: O-cookie d=000000004c8f7214{9p.inode} n=00000000e96425fb
	[  +0.000055] FS-Cache: O-key=[8] '69586c0400000000'
	[  +0.000048] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000052] FS-Cache: N-cookie d=000000004c8f7214{9p.inode} n=000000002c8b6de5
	[  +0.000065] FS-Cache: N-key=[8] '69586c0400000000'
	[  +2.938998] FS-Cache: Duplicate cookie detected
	[  +0.000053] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000063] FS-Cache: O-cookie d=000000004c8f7214{9p.inode} n=00000000cf33e52e
	[  +0.000053] FS-Cache: O-key=[8] '68586c0400000000'
	[  +0.000044] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000062] FS-Cache: N-cookie d=000000004c8f7214{9p.inode} n=0000000028ebb5ff
	[  +0.000039] FS-Cache: N-key=[8] '68586c0400000000'
	[  +0.399425] FS-Cache: Duplicate cookie detected
	[  +0.000077] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000095] FS-Cache: O-cookie d=000000004c8f7214{9p.inode} n=000000005fb5370a
	[  +0.000107] FS-Cache: O-key=[8] '71586c0400000000'
	[  +0.000082] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000073] FS-Cache: N-cookie d=000000004c8f7214{9p.inode} n=000000006daca0f3
	[  +0.000104] FS-Cache: N-key=[8] '71586c0400000000'
	
	* 
	* ==> etcd [0b995032f185] <==
	* {"level":"info","ts":"2023-01-14T10:32:08.029Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-14T10:32:08.030Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-14T10:32:08.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-01-14T10:32:08.030Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-01-14T10:32:08.030Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:32:08.030Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:32:08.031Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T10:32:08.031Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:32:08.032Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:32:08.032Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:32:08.032Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:32:09.418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-14T10:32:09.418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:32:09.418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-14T10:32:09.418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:09.418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:09.418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:09.418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2023-01-14T10:32:09.421Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-022829 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:32:09.421Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:32:09.421Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:32:09.422Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:32:09.422Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:32:09.423Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-01-14T10:32:09.423Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [516991d5f2e5] <==
	* {"level":"info","ts":"2023-01-14T10:28:52.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:52.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:52.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:52.220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-01-14T10:28:52.220Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:52.221Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:52.221Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:52.221Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:28:52.221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:28:52.221Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-022829 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:28:52.221Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:28:52.221Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:28:52.221Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:28:52.222Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-01-14T10:28:52.222Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:29:26.165Z","caller":"traceutil/trace.go:171","msg":"trace[947145312] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"231.832324ms","start":"2023-01-14T10:29:25.934Z","end":"2023-01-14T10:29:26.165Z","steps":["trace[947145312] 'process raft request'  (duration: 231.610016ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-14T10:30:09.195Z","caller":"traceutil/trace.go:171","msg":"trace[1262021912] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"178.089525ms","start":"2023-01-14T10:30:09.017Z","end":"2023-01-14T10:30:09.195Z","steps":["trace[1262021912] 'process raft request'  (duration: 176.915491ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-14T10:31:21.330Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-14T10:31:21.330Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-022829","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2023/01/14 10:31:21 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2023/01/14 10:31:21 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2023-01-14T10:31:21.341Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2023-01-14T10:31:21.343Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:31:21.344Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-01-14T10:31:21.344Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"multinode-022829","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  10:34:58 up 34 min,  0 users,  load average: 0.47, 0.81, 0.67
	Linux multinode-022829 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [24f8955c8cfa] <==
	* I0114 10:32:10.940645       1 controller.go:85] Starting OpenAPI controller
	I0114 10:32:10.940877       1 controller.go:85] Starting OpenAPI V3 controller
	I0114 10:32:10.941081       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 10:32:10.941243       1 naming_controller.go:291] Starting NamingConditionController
	I0114 10:32:10.941324       1 establishing_controller.go:76] Starting EstablishingController
	I0114 10:32:10.941406       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0114 10:32:10.941449       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0114 10:32:10.941412       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0114 10:32:10.941645       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0114 10:32:11.022340       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0114 10:32:11.026961       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:32:11.033027       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 10:32:11.038110       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 10:32:11.038114       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 10:32:11.040617       1 cache.go:39] Caches are synced for autoregister controller
	I0114 10:32:11.040680       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 10:32:11.040867       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 10:32:11.755606       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:32:11.941903       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:32:13.525124       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 10:32:13.759799       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 10:32:13.824640       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 10:32:13.936971       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:32:13.942604       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 10:33:15.420629       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [d3ae0d142c8f] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0114 10:31:31.312635       1 logging.go:59] [core] [Channel #190 SubChannel #191] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0114 10:31:31.333599       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0114 10:31:31.340836       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [067c6e644f07] <==
	* I0114 10:32:24.120958       1 event.go:294] "Event occurred" object="multinode-022829" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-022829 event: Registered Node multinode-022829 in Controller"
	I0114 10:32:24.121046       1 event.go:294] "Event occurred" object="multinode-022829-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-022829-m02 event: Registered Node multinode-022829-m02 in Controller"
	I0114 10:32:24.121070       1 event.go:294] "Event occurred" object="multinode-022829-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-022829-m03 event: Registered Node multinode-022829-m03 in Controller"
	I0114 10:32:24.123478       1 shared_informer.go:262] Caches are synced for attach detach
	I0114 10:32:24.125062       1 shared_informer.go:262] Caches are synced for daemon sets
	I0114 10:32:24.125121       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0114 10:32:24.127092       1 shared_informer.go:262] Caches are synced for GC
	I0114 10:32:24.132568       1 shared_informer.go:262] Caches are synced for TTL
	I0114 10:32:24.134023       1 shared_informer.go:262] Caches are synced for node
	I0114 10:32:24.134071       1 range_allocator.go:166] Starting range CIDR allocator
	I0114 10:32:24.134077       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0114 10:32:24.134085       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0114 10:32:24.222412       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0114 10:32:24.548861       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:32:24.620697       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:32:24.620771       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0114 10:32:34.410360       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-qnfct"
	W0114 10:32:37.415877       1 topologycache.go:199] Can't get CPU or zone information for multinode-022829-m03 node
	W0114 10:32:37.559344       1 topologycache.go:199] Can't get CPU or zone information for multinode-022829-m03 node
	W0114 10:32:37.560880       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-022829-m02" does not exist
	I0114 10:32:37.567122       1 range_allocator.go:367] Set node multinode-022829-m02 PodCIDR to [10.244.1.0/24]
	I0114 10:33:04.131689       1 event.go:294] "Event occurred" object="multinode-022829-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-022829-m03 status is now: NodeNotReady"
	W0114 10:33:04.131882       1 topologycache.go:199] Can't get CPU or zone information for multinode-022829-m02 node
	I0114 10:33:04.135484       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-6bgqj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:33:04.141316       1 event.go:294] "Event occurred" object="kube-system/kindnet-2ffw5" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-controller-manager [9048785f4e90] <==
	* I0114 10:29:20.039079       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0114 10:29:34.893284       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-022829-m02" does not exist
	I0114 10:29:34.900370       1 range_allocator.go:367] Set node multinode-022829-m02 PodCIDR to [10.244.1.0/24]
	I0114 10:29:34.902049       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-crlwb"
	I0114 10:29:34.902149       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7p92j"
	W0114 10:29:35.041669       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-022829-m02. Assuming now as a timestamp.
	I0114 10:29:35.041898       1 event.go:294] "Event occurred" object="multinode-022829-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-022829-m02 event: Registered Node multinode-022829-m02 in Controller"
	W0114 10:29:55.215232       1 topologycache.go:199] Can't get CPU or zone information for multinode-022829-m02 node
	I0114 10:29:57.788663       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-65db55d5d6 to 2"
	I0114 10:29:57.793228       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-tqh8p"
	I0114 10:29:57.796462       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-586cr"
	I0114 10:30:00.052224       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-tqh8p" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-tqh8p"
	W0114 10:30:18.942220       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-022829-m03" does not exist
	W0114 10:30:18.942316       1 topologycache.go:199] Can't get CPU or zone information for multinode-022829-m02 node
	I0114 10:30:18.945439       1 range_allocator.go:367] Set node multinode-022829-m03 PodCIDR to [10.244.2.0/24]
	I0114 10:30:18.948457       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6bgqj"
	I0114 10:30:18.950898       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2ffw5"
	W0114 10:30:20.055373       1 node_lifecycle_controller.go:1058] Missing timestamp for Node multinode-022829-m03. Assuming now as a timestamp.
	I0114 10:30:20.055554       1 event.go:294] "Event occurred" object="multinode-022829-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-022829-m03 event: Registered Node multinode-022829-m03 in Controller"
	W0114 10:30:29.362473       1 topologycache.go:199] Can't get CPU or zone information for multinode-022829-m02 node
	W0114 10:31:06.607859       1 topologycache.go:199] Can't get CPU or zone information for multinode-022829-m02 node
	W0114 10:31:07.390097       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-022829-m03" does not exist
	W0114 10:31:07.390174       1 topologycache.go:199] Can't get CPU or zone information for multinode-022829-m02 node
	I0114 10:31:07.395772       1 range_allocator.go:367] Set node multinode-022829-m03 PodCIDR to [10.244.3.0/24]
	W0114 10:31:17.640871       1 topologycache.go:199] Can't get CPU or zone information for multinode-022829-m02 node
	
	* 
	* ==> kube-proxy [ed7a47472cbc] <==
	* I0114 10:29:11.471642       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0114 10:29:11.471715       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0114 10:29:11.471757       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:29:11.491262       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:29:11.491305       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0114 10:29:11.491312       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0114 10:29:11.491320       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0114 10:29:11.491334       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:29:11.491495       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:29:11.491761       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:29:11.491795       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:29:11.492257       1 config.go:317] "Starting service config controller"
	I0114 10:29:11.492290       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:29:11.492306       1 config.go:444] "Starting node config controller"
	I0114 10:29:11.492313       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:29:11.492331       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:29:11.492334       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:29:11.593426       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:29:11.593546       1 shared_informer.go:262] Caches are synced for node config
	I0114 10:29:11.593560       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [f24b54fc2352] <==
	* I0114 10:32:13.337829       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0114 10:32:13.337924       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0114 10:32:13.337971       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:32:13.430272       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:32:13.430294       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0114 10:32:13.430300       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0114 10:32:13.430314       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0114 10:32:13.430335       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:32:13.430443       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:32:13.430571       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:32:13.430579       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:32:13.431449       1 config.go:317] "Starting service config controller"
	I0114 10:32:13.431459       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:32:13.431477       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:32:13.431480       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:32:13.431765       1 config.go:444] "Starting node config controller"
	I0114 10:32:13.431770       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:32:13.532586       1 shared_informer.go:262] Caches are synced for node config
	I0114 10:32:13.532615       1 shared_informer.go:262] Caches are synced for service config
	I0114 10:32:13.532710       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [22eb2357fc11] <==
	* E0114 10:28:54.329212       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0114 10:28:55.167369       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0114 10:28:55.167471       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0114 10:28:55.209602       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:55.209664       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0114 10:28:55.226758       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0114 10:28:55.226849       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0114 10:28:55.255544       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:55.255704       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0114 10:28:55.321114       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0114 10:28:55.321160       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:28:55.416028       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0114 10:28:55.416090       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0114 10:28:55.441625       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0114 10:28:55.441707       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0114 10:28:55.447403       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0114 10:28:55.447448       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0114 10:28:55.471517       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:28:55.471563       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0114 10:28:55.760052       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0114 10:28:55.760074       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0114 10:28:57.625925       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:31:21.320946       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0114 10:31:21.321539       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0114 10:31:21.321600       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [91c371b27d02] <==
	* I0114 10:32:08.542389       1 serving.go:348] Generated self-signed cert in-memory
	W0114 10:32:10.942268       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0114 10:32:10.942335       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 10:32:10.942359       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 10:32:10.942372       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 10:32:10.951963       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 10:32:10.952002       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:32:10.953081       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 10:32:10.953131       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 10:32:10.953141       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:32:10.953148       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:32:11.054080       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 10:31:58 UTC, end at Sat 2023-01-14 10:34:59 UTC. --
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.764750    1232 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.764844    1232 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.764905    1232 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.764941    1232 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843383    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv5mb\" (UniqueName: \"kubernetes.io/projected/f6acf6b8-0d1e-4694-85de-f70fb0bcfee7-kube-api-access-kv5mb\") pod \"kube-proxy-pplrc\" (UID: \"f6acf6b8-0d1e-4694-85de-f70fb0bcfee7\") " pod="kube-system/kube-proxy-pplrc"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843501    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb280495-e617-461c-a259-e28b47f301d6-xtables-lock\") pod \"kindnet-pqh2t\" (UID: \"cb280495-e617-461c-a259-e28b47f301d6\") " pod="kube-system/kindnet-pqh2t"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843538    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6acf6b8-0d1e-4694-85de-f70fb0bcfee7-kube-proxy\") pod \"kube-proxy-pplrc\" (UID: \"f6acf6b8-0d1e-4694-85de-f70fb0bcfee7\") " pod="kube-system/kube-proxy-pplrc"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843564    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6acf6b8-0d1e-4694-85de-f70fb0bcfee7-xtables-lock\") pod \"kube-proxy-pplrc\" (UID: \"f6acf6b8-0d1e-4694-85de-f70fb0bcfee7\") " pod="kube-system/kube-proxy-pplrc"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843588    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb280495-e617-461c-a259-e28b47f301d6-lib-modules\") pod \"kindnet-pqh2t\" (UID: \"cb280495-e617-461c-a259-e28b47f301d6\") " pod="kube-system/kindnet-pqh2t"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843615    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxgp\" (UniqueName: \"kubernetes.io/projected/cb280495-e617-461c-a259-e28b47f301d6-kube-api-access-bxxgp\") pod \"kindnet-pqh2t\" (UID: \"cb280495-e617-461c-a259-e28b47f301d6\") " pod="kube-system/kindnet-pqh2t"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843647    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrxr\" (UniqueName: \"kubernetes.io/projected/29960f5b-1391-43dd-9ebb-93c76a894fa2-kube-api-access-tkrxr\") pod \"storage-provisioner\" (UID: \"29960f5b-1391-43dd-9ebb-93c76a894fa2\") " pod="kube-system/storage-provisioner"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843674    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6acf6b8-0d1e-4694-85de-f70fb0bcfee7-lib-modules\") pod \"kube-proxy-pplrc\" (UID: \"f6acf6b8-0d1e-4694-85de-f70fb0bcfee7\") " pod="kube-system/kube-proxy-pplrc"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843704    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cb280495-e617-461c-a259-e28b47f301d6-cni-cfg\") pod \"kindnet-pqh2t\" (UID: \"cb280495-e617-461c-a259-e28b47f301d6\") " pod="kube-system/kindnet-pqh2t"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843732    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/29960f5b-1391-43dd-9ebb-93c76a894fa2-tmp\") pod \"storage-provisioner\" (UID: \"29960f5b-1391-43dd-9ebb-93c76a894fa2\") " pod="kube-system/storage-provisioner"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843759    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmdt\" (UniqueName: \"kubernetes.io/projected/8ba9cbef-253e-46ad-aa78-55875dc5939b-kube-api-access-hcmdt\") pod \"coredns-565d847f94-xg88j\" (UID: \"8ba9cbef-253e-46ad-aa78-55875dc5939b\") " pod="kube-system/coredns-565d847f94-xg88j"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843790    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vntvj\" (UniqueName: \"kubernetes.io/projected/4705768a-d0d5-4c2c-bef7-e967f40a5ba8-kube-api-access-vntvj\") pod \"busybox-65db55d5d6-586cr\" (UID: \"4705768a-d0d5-4c2c-bef7-e967f40a5ba8\") " pod="default/busybox-65db55d5d6-586cr"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843827    1232 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ba9cbef-253e-46ad-aa78-55875dc5939b-config-volume\") pod \"coredns-565d847f94-xg88j\" (UID: \"8ba9cbef-253e-46ad-aa78-55875dc5939b\") " pod="kube-system/coredns-565d847f94-xg88j"
	Jan 14 10:32:11 multinode-022829 kubelet[1232]: I0114 10:32:11.843872    1232 reconciler.go:169] "Reconciler: start to sync state"
	Jan 14 10:32:13 multinode-022829 kubelet[1232]: I0114 10:32:13.115007    1232 request.go:682] Waited for 1.16756063s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token
	Jan 14 10:32:13 multinode-022829 kubelet[1232]: I0114 10:32:13.329977    1232 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="78c89eb9eb0782fd9ba891e36d8a79450b3fc191fadadb7e531b2b70bc9de9e3"
	Jan 14 10:32:13 multinode-022829 kubelet[1232]: I0114 10:32:13.717531    1232 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="00eea7b06a7ea8414cc6cb650e2abc342682e2ceaeb51c0a1539868fc90414ad"
	Jan 14 10:32:42 multinode-022829 kubelet[1232]: I0114 10:32:42.971919    1232 scope.go:115] "RemoveContainer" containerID="4b8fe186dcad81eb12cc615fdd3c0840c08dc4073cd30807d264da8568b505bd"
	Jan 14 10:32:42 multinode-022829 kubelet[1232]: I0114 10:32:42.972104    1232 scope.go:115] "RemoveContainer" containerID="75564d6b4d6379eafdc84e4331f5adef7ff076f121b54d65c6efb7e51014e4b0"
	Jan 14 10:32:42 multinode-022829 kubelet[1232]: E0114 10:32:42.972251    1232 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(29960f5b-1391-43dd-9ebb-93c76a894fa2)\"" pod="kube-system/storage-provisioner" podUID=29960f5b-1391-43dd-9ebb-93c76a894fa2
	Jan 14 10:32:57 multinode-022829 kubelet[1232]: I0114 10:32:57.929243    1232 scope.go:115] "RemoveContainer" containerID="75564d6b4d6379eafdc84e4331f5adef7ff076f121b54d65c6efb7e51014e4b0"
	
	* 
	* ==> storage-provisioner [75564d6b4d63] <==
	* I0114 10:32:12.673174       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0114 10:32:42.676479       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [fd0ffbefdb59] <==
	* I0114 10:32:58.019769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:32:58.026802       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:32:58.026869       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 10:33:15.422000       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 10:33:15.422155       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-022829_74934db5-6ece-44c2-98b2-12115bf46e02!
	I0114 10:33:15.422298       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6907ce21-09b1-40ca-ba18-002bb46753de", APIVersion:"v1", ResourceVersion:"888", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-022829_74934db5-6ece-44c2-98b2-12115bf46e02 became leader
	I0114 10:33:15.522603       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-022829_74934db5-6ece-44c2-98b2-12115bf46e02!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-022829 -n multinode-022829
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-022829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-qnfct
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-022829 describe pod busybox-65db55d5d6-qnfct
helpers_test.go:280: (dbg) kubectl --context multinode-022829 describe pod busybox-65db55d5d6-qnfct:

                                                
                                                
-- stdout --
	Name:             busybox-65db55d5d6-qnfct
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             multinode-022829-m03/
	Labels:           app=busybox
	                  pod-template-hash=65db55d5d6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-65db55d5d6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-flfcf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-flfcf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m26s  default-scheduler  Successfully assigned default/busybox-65db55d5d6-qnfct to multinode-022829-m03

                                                
                                                
-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (219.95s)

                                                
                                    
TestRunningBinaryUpgrade (63.57s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3893132018.exe start -p running-upgrade-025006 --memory=2200 --vm-driver=docker 
E0114 02:50:42.256993    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3893132018.exe start -p running-upgrade-025006 --memory=2200 --vm-driver=docker : exit status 70 (47.10004229s)

                                                
                                                
-- stdout --
	* [running-upgrade-025006] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1144227293
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:50:33.670340434 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-025006" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:50:53.133341557 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-025006", then "minikube start -p running-upgrade-025006 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:50:53.133341557 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
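The drop-in comments in the diffs above describe the provisioning mechanism: the generated unit first clears the ExecStart= inherited from the base docker.service (an empty ExecStart= is required, otherwise systemd rejects a second ExecStart= for a Type=notify service), and the diff-then-mv command only installs the new unit and restarts docker when the generated file differs from the current one. A minimal sketch of that same pattern, using a hypothetical override path and illustrative dockerd flags rather than the exact unit minikube writes:

    # Write a drop-in that clears the inherited ExecStart= before defining a new one;
    # without the empty ExecStart=, systemd refuses to start the service with the
    # "more than one ExecStart= setting" error quoted in the diff above.
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null
    # Apply it the same way the provisioner does:
    sudo systemctl daemon-reload && sudo systemctl restart docker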
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3893132018.exe start -p running-upgrade-025006 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3893132018.exe start -p running-upgrade-025006 --memory=2200 --vm-driver=docker : exit status 70 (4.303391985s)

                                                
                                                
-- stdout --
	* [running-upgrade-025006] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1034338451
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-025006" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3893132018.exe start -p running-upgrade-025006 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3893132018.exe start -p running-upgrade-025006 --memory=2200 --vm-driver=docker : exit status 70 (4.414182285s)

                                                
                                                
-- stdout --
	* [running-upgrade-025006] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig415701134
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-025006" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
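When docker.service fails like this, the status and journal that the error message points at live inside the kic container, not on the macOS host. A hedged example of collecting them for this profile (assuming the container is still running; these commands are illustrative and are not what the test harness itself runs):

    # Via minikube's ssh helper (the pipe runs inside the guest) ...
    minikube ssh -p running-upgrade-025006 "sudo journalctl -u docker --no-pager | tail -n 50"
    # ... or directly through docker exec, since the kic container runs systemd as init.
    docker exec running-upgrade-025006 systemctl status docker --no-pager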
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-01-14 02:51:07.013416 -0800 PST m=+2746.298954591
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-025006
helpers_test.go:235: (dbg) docker inspect running-upgrade-025006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "636f5630e915f53b64de1609f29952d0326fa7735d9f544396627d07ba3fd9dd",
	        "Created": "2023-01-14T10:50:41.860931332Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 169079,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:50:42.071020847Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/636f5630e915f53b64de1609f29952d0326fa7735d9f544396627d07ba3fd9dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/636f5630e915f53b64de1609f29952d0326fa7735d9f544396627d07ba3fd9dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/636f5630e915f53b64de1609f29952d0326fa7735d9f544396627d07ba3fd9dd/hosts",
	        "LogPath": "/var/lib/docker/containers/636f5630e915f53b64de1609f29952d0326fa7735d9f544396627d07ba3fd9dd/636f5630e915f53b64de1609f29952d0326fa7735d9f544396627d07ba3fd9dd-json.log",
	        "Name": "/running-upgrade-025006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "running-upgrade-025006:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bb7fd608033b3fc9cf0b69da56d207dc2a9dd0931dac93c0fe805fa0c66957b9-init/diff:/var/lib/docker/overlay2/d9e0372027d4333f5dfc0260dc68a1ef91bfcf7d5f5dff141a717545493ad065/diff:/var/lib/docker/overlay2/46081d4fb25a1a237bec1d8b89142bd952d9a8ff642dc579eeb356856b3dc8f6/diff:/var/lib/docker/overlay2/eb8ce44283821701025459292279f6b660732954d70c28df16a5e34c5d1ee092/diff:/var/lib/docker/overlay2/374900f896c926260c57188b12e8004bbdeb9a35b372541753d25218b7ca4a49/diff:/var/lib/docker/overlay2/64d9423cd618ede75c9ec11b1e40f8936a2d9da7469dcd147c73b0c79314810e/diff:/var/lib/docker/overlay2/f09f1c250550837a5a081fdfb60d64a85956bc878c521067af6674d12588d9c6/diff:/var/lib/docker/overlay2/1bc8eb0fac0a908b4186d4606052ee443a47977f5b3c3b24901f17432bca2123/diff:/var/lib/docker/overlay2/e4a65e0de54c70dd035902d2b48fd4522d689efc3d4adb1d6a7c7e3c66663b75/diff:/var/lib/docker/overlay2/d1dd7d1bcda554415df7eb487329ae4cd88b1dafd1ca8370c77359bd9e890fc4/diff:/var/lib/docker/overlay2/c3abbd
bbc845f336a493a947a15ae79e8a7332c0a95294c02182b983a80ada3c/diff:/var/lib/docker/overlay2/8c241dc16e96a8e06e950dae445df572495725ed987a37a60a0d0aa6356af65f/diff:/var/lib/docker/overlay2/4346c678d640d3c7b956f2ac5e9a9b79402dc7681c38c3b1f39282863407d785/diff:/var/lib/docker/overlay2/961f1824ebaaf19cfbf85968119412950aeb0f2e10fc4f27696105167c943f97/diff:/var/lib/docker/overlay2/351c2895fcfe559e893dc1b96a95b91a611bebbe4185fad4b356163e0d53e0a4/diff:/var/lib/docker/overlay2/75541cc507d5ef571abe82555fbeabb82cda190d37788579def271183baef953/diff:/var/lib/docker/overlay2/262ff3966059c3410227ffadb65e17ec76f47f8ca8af6c5b335324c0e8dc82f1/diff:/var/lib/docker/overlay2/9e45f365e6a8d120e6856b0f4ee4ef3d08632d0c2030373b57671008238f4c9f/diff:/var/lib/docker/overlay2/9d313d6638db8ca67fde08c40aabf92765fcd43884b3b16527b5e97ee9481ba5/diff:/var/lib/docker/overlay2/1dd6c3e9fb55b6e0e8e620ab5f0b619d668e72f21fce99308f0ac9b583353cf7/diff:/var/lib/docker/overlay2/4cbf7e516424b9114144c6b3c498b6615fa96d4c548bfbfd45c308e8bdf992e3/diff:/var/lib/d
ocker/overlay2/39aa6d3378f9b18db0d84015623602a66f5abb424e568220a533eac5821912c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb7fd608033b3fc9cf0b69da56d207dc2a9dd0931dac93c0fe805fa0c66957b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb7fd608033b3fc9cf0b69da56d207dc2a9dd0931dac93c0fe805fa0c66957b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb7fd608033b3fc9cf0b69da56d207dc2a9dd0931dac93c0fe805fa0c66957b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-025006",
	                "Source": "/var/lib/docker/volumes/running-upgrade-025006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-025006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-025006",
	                "name.minikube.sigs.k8s.io": "running-upgrade-025006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "718ff40599d7696330e4215ae73ed5f335fe2b670114b115076676c163032ceb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52594"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52595"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52596"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/718ff40599d7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "4337409ae0af666b9cf27078a9992296b72ca1519c367ad2734d29fe189fd0c5",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "5d3333edc5816d83f58b9a16a9d3c8b9bad2c1b09e956286348d6a0d2906ce7d",
	                    "EndpointID": "4337409ae0af666b9cf27078a9992296b72ca1519c367ad2734d29fe189fd0c5",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
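The full docker inspect dump above can be narrowed to the fields of interest with inspect's Go-template --format flag; for example, run against this profile's container name:

    docker inspect --format '{{.State.Status}} {{.NetworkSettings.IPAddress}}' running-upgrade-025006
    docker inspect --format '{{json .HostConfig.PortBindings}}' running-upgrade-025006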
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-025006 -n running-upgrade-025006
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-025006 -n running-upgrade-025006: exit status 6 (386.031509ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 02:51:07.446609   13704 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-025006" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-025006" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-025006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-025006
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-025006: (2.316445802s)
--- FAIL: TestRunningBinaryUpgrade (63.57s)

                                                
                                    
x
+
TestKubernetesUpgrade (566.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0114 02:47:58.409062    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:47:58.415479    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:47:58.425887    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:47:58.447238    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:47:58.488258    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:47:58.569587    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:47:58.730984    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:47:59.051575    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:47:59.691876    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:48:00.972556    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:48:03.532991    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:48:08.653584    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 02:48:18.893771    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m9.659073262s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-024716] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-024716 in cluster kubernetes-upgrade-024716
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 02:47:16.403104   12729 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:47:16.403805   12729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:47:16.403816   12729 out.go:309] Setting ErrFile to fd 2...
	I0114 02:47:16.403823   12729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:47:16.404060   12729 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:47:16.404952   12729 out.go:303] Setting JSON to false
	I0114 02:47:16.425219   12729 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2810,"bootTime":1673690426,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 02:47:16.425358   12729 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:47:16.468099   12729 out.go:177] * [kubernetes-upgrade-024716] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:47:16.488962   12729 notify.go:220] Checking for updates...
	I0114 02:47:16.510284   12729 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:47:16.532277   12729 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:47:16.554005   12729 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:47:16.575438   12729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:47:16.597182   12729 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 02:47:16.619721   12729 config.go:180] Loaded profile config "cert-expiration-024457": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:47:16.619827   12729 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:47:16.679945   12729 docker.go:138] docker version: linux-20.10.21
	I0114 02:47:16.680083   12729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:47:16.820350   12729 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 10:47:16.729787026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:47:16.842310   12729 out.go:177] * Using the docker driver based on user configuration
	I0114 02:47:16.863915   12729 start.go:294] selected driver: docker
	I0114 02:47:16.863933   12729 start.go:838] validating driver "docker" against <nil>
	I0114 02:47:16.863982   12729 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:47:16.866487   12729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:47:17.006944   12729 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 10:47:16.915904594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:47:17.007081   12729 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 02:47:17.007220   12729 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 02:47:17.028473   12729 out.go:177] * Using Docker Desktop driver with root privileges
	I0114 02:47:17.049260   12729 cni.go:95] Creating CNI manager for ""
	I0114 02:47:17.049323   12729 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:47:17.049340   12729 start_flags.go:319] config:
	{Name:kubernetes-upgrade-024716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-024716 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:47:17.087326   12729 out.go:177] * Starting control plane node kubernetes-upgrade-024716 in cluster kubernetes-upgrade-024716
	I0114 02:47:17.146308   12729 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:47:17.167185   12729 out.go:177] * Pulling base image ...
	I0114 02:47:17.209108   12729 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 02:47:17.209146   12729 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:47:17.209176   12729 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0114 02:47:17.209187   12729 cache.go:57] Caching tarball of preloaded images
	I0114 02:47:17.209310   12729 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 02:47:17.209321   12729 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0114 02:47:17.209696   12729 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/config.json ...
	I0114 02:47:17.209825   12729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/config.json: {Name:mkca8bae9465b035e571eadfe90258694044e484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:47:17.264998   12729 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 02:47:17.265033   12729 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 02:47:17.265052   12729 cache.go:193] Successfully downloaded all kic artifacts
	I0114 02:47:17.265105   12729 start.go:364] acquiring machines lock for kubernetes-upgrade-024716: {Name:mk762df0d9b21fb27f39edb10bf3e1597ec98350 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:47:17.265266   12729 start.go:368] acquired machines lock for "kubernetes-upgrade-024716" in 148.796µs
	I0114 02:47:17.265297   12729 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-024716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-024716 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 02:47:17.265357   12729 start.go:125] createHost starting for "" (driver="docker")
	I0114 02:47:17.308752   12729 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0114 02:47:17.309247   12729 start.go:159] libmachine.API.Create for "kubernetes-upgrade-024716" (driver="docker")
	I0114 02:47:17.309296   12729 client.go:168] LocalClient.Create starting
	I0114 02:47:17.309494   12729 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem
	I0114 02:47:17.309582   12729 main.go:134] libmachine: Decoding PEM data...
	I0114 02:47:17.309614   12729 main.go:134] libmachine: Parsing certificate...
	I0114 02:47:17.309722   12729 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem
	I0114 02:47:17.309785   12729 main.go:134] libmachine: Decoding PEM data...
	I0114 02:47:17.309802   12729 main.go:134] libmachine: Parsing certificate...
	I0114 02:47:17.310752   12729 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-024716 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0114 02:47:17.365991   12729 cli_runner.go:211] docker network inspect kubernetes-upgrade-024716 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0114 02:47:17.366109   12729 network_create.go:280] running [docker network inspect kubernetes-upgrade-024716] to gather additional debugging logs...
	I0114 02:47:17.366128   12729 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-024716
	W0114 02:47:17.420106   12729 cli_runner.go:211] docker network inspect kubernetes-upgrade-024716 returned with exit code 1
	I0114 02:47:17.420130   12729 network_create.go:283] error running [docker network inspect kubernetes-upgrade-024716]: docker network inspect kubernetes-upgrade-024716: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-024716
	I0114 02:47:17.420146   12729 network_create.go:285] output of [docker network inspect kubernetes-upgrade-024716]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-024716
	
	** /stderr **
	I0114 02:47:17.420240   12729 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 02:47:17.476890   12729 network.go:277] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000131e8] misses:0}
	I0114 02:47:17.476930   12729 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:47:17.476944   12729 network_create.go:123] attempt to create docker network kubernetes-upgrade-024716 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0114 02:47:17.477042   12729 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 kubernetes-upgrade-024716
	W0114 02:47:17.531189   12729 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 kubernetes-upgrade-024716 returned with exit code 1
	W0114 02:47:17.531240   12729 network_create.go:115] failed to create docker network kubernetes-upgrade-024716 192.168.49.0/24, will retry: subnet is taken
	I0114 02:47:17.531510   12729 network.go:268] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000131e8] amended:false}} dirty:map[] misses:0}
	I0114 02:47:17.531525   12729 network.go:213] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:47:17.531740   12729 network.go:277] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000131e8] amended:true}} dirty:map[192.168.49.0:0xc0000131e8 192.168.58.0:0xc0007981e0] misses:0}
	I0114 02:47:17.531755   12729 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:47:17.531765   12729 network_create.go:123] attempt to create docker network kubernetes-upgrade-024716 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0114 02:47:17.531852   12729 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 kubernetes-upgrade-024716
	W0114 02:47:17.585757   12729 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 kubernetes-upgrade-024716 returned with exit code 1
	W0114 02:47:17.585793   12729 network_create.go:115] failed to create docker network kubernetes-upgrade-024716 192.168.58.0/24, will retry: subnet is taken
	I0114 02:47:17.586086   12729 network.go:268] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000131e8] amended:true}} dirty:map[192.168.49.0:0xc0000131e8 192.168.58.0:0xc0007981e0] misses:1}
	I0114 02:47:17.586102   12729 network.go:213] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:47:17.586299   12729 network.go:277] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000131e8] amended:true}} dirty:map[192.168.49.0:0xc0000131e8 192.168.58.0:0xc0007981e0 192.168.67.0:0xc000ae8470] misses:1}
	I0114 02:47:17.586315   12729 network.go:210] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:47:17.586323   12729 network_create.go:123] attempt to create docker network kubernetes-upgrade-024716 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0114 02:47:17.586413   12729 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 kubernetes-upgrade-024716
	W0114 02:47:17.641068   12729 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 kubernetes-upgrade-024716 returned with exit code 1
	W0114 02:47:17.641104   12729 network_create.go:115] failed to create docker network kubernetes-upgrade-024716 192.168.67.0/24, will retry: subnet is taken
	I0114 02:47:17.641372   12729 network.go:268] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000131e8] amended:true}} dirty:map[192.168.49.0:0xc0000131e8 192.168.58.0:0xc0007981e0 192.168.67.0:0xc000ae8470] misses:2}
	I0114 02:47:17.641389   12729 network.go:213] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:47:17.641607   12729 network.go:277] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000131e8] amended:true}} dirty:map[192.168.49.0:0xc0000131e8 192.168.58.0:0xc0007981e0 192.168.67.0:0xc000ae8470 192.168.76.0:0xc000013408] misses:2}
	I0114 02:47:17.641623   12729 network.go:210] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:47:17.641630   12729 network_create.go:123] attempt to create docker network kubernetes-upgrade-024716 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0114 02:47:17.641715   12729 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 kubernetes-upgrade-024716
	I0114 02:47:17.730031   12729 network_create.go:107] docker network kubernetes-upgrade-024716 192.168.76.0/24 created
	I0114 02:47:17.730063   12729 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-024716" container
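The four create attempts above are minikube's subnet probing: it reserves 192.168.49.0/24, lets docker network create fail with "subnet is taken", steps through 192.168.58.0/24 and 192.168.67.0/24 until 192.168.76.0/24 succeeds, and then hands the node the first client address (.2). A minimal, illustrative sketch of that loop in Go, assuming only that the docker CLI is on PATH and reusing the profile name from the log (this is not minikube's own network_create.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "kubernetes-upgrade-024716" // profile name taken from the log above
	// Candidate /24 subnets stepped the same way the log does: 49, 58, 67, 76, ...
	for third := 49; third <= 103; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500", profile).CombinedOutput()
		if err == nil {
			// The first usable client address becomes the node IP ("calculated static IP" above).
			fmt.Printf("created %s on %s, node IP 192.168.%d.2\n", profile, subnet, third)
			return
		}
		if strings.Contains(string(out), "overlap") {
			continue // subnet is taken by another network; try the next candidate
		}
		fmt.Printf("giving up: %v: %s\n", err, out)
		return
	}
	fmt.Println("no free candidate subnet found")
}

Stepping by 9 mirrors the candidate subnets seen in the log; the real flow also honours the one-minute reservations shown in the network.go:277 lines so parallel profile creations do not race for the same range.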
	I0114 02:47:17.730203   12729 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0114 02:47:17.788125   12729 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-024716 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 --label created_by.minikube.sigs.k8s.io=true
	I0114 02:47:17.843033   12729 oci.go:103] Successfully created a docker volume kubernetes-upgrade-024716
	I0114 02:47:17.843182   12729 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-024716-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 --entrypoint /usr/bin/test -v kubernetes-upgrade-024716:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0114 02:47:18.292375   12729 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-024716
	I0114 02:47:18.292419   12729 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 02:47:18.292435   12729 kic.go:190] Starting extracting preloaded images to volume ...
	I0114 02:47:18.292543   12729 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-024716:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0114 02:47:23.835078   12729 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-024716:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (5.542390385s)
	I0114 02:47:23.835103   12729 kic.go:199] duration metric: took 5.542601 seconds to extract preloaded images to volume
	I0114 02:47:23.835218   12729 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0114 02:47:23.974609   12729 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-024716 --name kubernetes-upgrade-024716 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-024716 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-024716 --network kubernetes-upgrade-024716 --ip 192.168.76.2 --volume kubernetes-upgrade-024716:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0114 02:47:24.323946   12729 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-024716 --format={{.State.Running}}
	I0114 02:47:24.383465   12729 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-024716 --format={{.State.Status}}
	I0114 02:47:24.445936   12729 cli_runner.go:164] Run: docker exec kubernetes-upgrade-024716 stat /var/lib/dpkg/alternatives/iptables
	I0114 02:47:24.556684   12729 oci.go:144] the created container "kubernetes-upgrade-024716" has a running status.
	I0114 02:47:24.556713   12729 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa...
	I0114 02:47:24.624288   12729 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0114 02:47:24.732207   12729 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-024716 --format={{.State.Status}}
	I0114 02:47:24.792809   12729 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0114 02:47:24.792827   12729 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-024716 chown docker:docker /home/docker/.ssh/authorized_keys]
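Before provisioning, kic.go:221 generates a fresh SSH key pair for the node, copies the public half into /home/docker/.ssh/authorized_keys (the 381-byte transfer above) and chowns it to the docker user. A sketch of producing such an authorized_keys entry, assuming the golang.org/x/crypto/ssh module is available (illustrative only, not minikube's implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the key pair; minikube keeps the private half under
	// .minikube/machines/<profile>/id_rsa and ships only the public half into the node.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// This is the single line that ends up in /home/docker/.ssh/authorized_keys.
	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
}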
	I0114 02:47:24.899280   12729 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-024716 --format={{.State.Status}}
	I0114 02:47:24.956355   12729 machine.go:88] provisioning docker machine ...
	I0114 02:47:24.956397   12729 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-024716"
	I0114 02:47:24.956529   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:25.013747   12729 main.go:134] libmachine: Using SSH client type: native
	I0114 02:47:25.013936   12729 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52458 <nil> <nil>}
	I0114 02:47:25.013950   12729 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-024716 && echo "kubernetes-upgrade-024716" | sudo tee /etc/hostname
	I0114 02:47:25.140403   12729 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-024716
	
	I0114 02:47:25.140519   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:25.198266   12729 main.go:134] libmachine: Using SSH client type: native
	I0114 02:47:25.198428   12729 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52458 <nil> <nil>}
	I0114 02:47:25.198443   12729 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-024716' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-024716/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-024716' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 02:47:25.316269   12729 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:47:25.316296   12729 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 02:47:25.316312   12729 ubuntu.go:177] setting up certificates
	I0114 02:47:25.316320   12729 provision.go:83] configureAuth start
	I0114 02:47:25.316411   12729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-024716
	I0114 02:47:25.372861   12729 provision.go:138] copyHostCerts
	I0114 02:47:25.372956   12729 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 02:47:25.372964   12729 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:47:25.373081   12729 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 02:47:25.373307   12729 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 02:47:25.373314   12729 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:47:25.373386   12729 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 02:47:25.373530   12729 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 02:47:25.373536   12729 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:47:25.373602   12729 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 02:47:25.373718   12729 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-024716 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-024716]
	I0114 02:47:25.591676   12729 provision.go:172] copyRemoteCerts
	I0114 02:47:25.591856   12729 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 02:47:25.591974   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:25.704356   12729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52458 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:47:25.788322   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 02:47:25.807448   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0114 02:47:25.825647   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 02:47:25.843669   12729 provision.go:86] duration metric: configureAuth took 527.329461ms
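configureAuth copied the host-side CA material, generated a server certificate whose SANs cover the node IP, loopback and the minikube host names listed in the provision.go:112 line, and pushed ca.pem, server.pem and server-key.pem into /etc/docker over SSH. A sketch of issuing a certificate with those SANs via crypto/x509; it self-signs for brevity, whereas the real flow signs with the machine CA under .minikube/certs, and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-024716"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go:112 line above.
		DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-024716"},
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}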
	I0114 02:47:25.843683   12729 ubuntu.go:193] setting minikube options for container-runtime
	I0114 02:47:25.843843   12729 config.go:180] Loaded profile config "kubernetes-upgrade-024716": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0114 02:47:25.843933   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:25.903169   12729 main.go:134] libmachine: Using SSH client type: native
	I0114 02:47:25.903323   12729 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52458 <nil> <nil>}
	I0114 02:47:25.903337   12729 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 02:47:26.051971   12729 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 02:47:26.051985   12729 ubuntu.go:71] root file system type: overlay
	I0114 02:47:26.052167   12729 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 02:47:26.052268   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:26.111756   12729 main.go:134] libmachine: Using SSH client type: native
	I0114 02:47:26.111919   12729 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52458 <nil> <nil>}
	I0114 02:47:26.111978   12729 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 02:47:26.236373   12729 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 02:47:26.236490   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:26.294607   12729 main.go:134] libmachine: Using SSH client type: native
	I0114 02:47:26.294769   12729 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52458 <nil> <nil>}
	I0114 02:47:26.294788   12729 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 02:47:26.873224   12729 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:47:26.233716005 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0114 02:47:26.873246   12729 machine.go:91] provisioned docker machine in 1.916852378s
	I0114 02:47:26.873253   12729 client.go:171] LocalClient.Create took 9.56383894s
	I0114 02:47:26.873272   12729 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-024716" took 9.56391461s
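The SSH command above implements a write-if-changed update of the docker unit: write docker.service.new, diff it against the installed file, and only when they differ move it into place and run daemon-reload, enable and restart. A local sketch of the same pattern (the real run executes these steps over SSH inside the node container; the unit body here is a placeholder, not the full file shown above):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// updateUnit swaps in the desired unit file and bounces docker only when the
// content actually changed, mirroring the diff/mv/systemctl chain in the log.
func updateUnit(path string, desired []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, desired) {
		return nil // nothing to do; avoids a needless daemon restart
	}
	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%s: %v: %s", args[0], err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		log.Fatal(err)
	}
}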
	I0114 02:47:26.873284   12729 start.go:300] post-start starting for "kubernetes-upgrade-024716" (driver="docker")
	I0114 02:47:26.873288   12729 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 02:47:26.873364   12729 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 02:47:26.873456   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:26.931811   12729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52458 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:47:27.017698   12729 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 02:47:27.021375   12729 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 02:47:27.021393   12729 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 02:47:27.021400   12729 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 02:47:27.021409   12729 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 02:47:27.021423   12729 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 02:47:27.021538   12729 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 02:47:27.021734   12729 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 02:47:27.021954   12729 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 02:47:27.030114   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:47:27.047718   12729 start.go:303] post-start completed in 174.416613ms
	I0114 02:47:27.048308   12729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-024716
	I0114 02:47:27.106597   12729 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/config.json ...
	I0114 02:47:27.107128   12729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:47:27.107194   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:27.165006   12729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52458 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:47:27.249000   12729 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 02:47:27.253785   12729 start.go:128] duration metric: createHost completed in 9.98830056s
	I0114 02:47:27.253803   12729 start.go:83] releasing machines lock for "kubernetes-upgrade-024716", held for 9.988409546s
	I0114 02:47:27.253915   12729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-024716
	I0114 02:47:27.310975   12729 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0114 02:47:27.310975   12729 ssh_runner.go:195] Run: cat /version.json
	I0114 02:47:27.311081   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:27.311087   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:27.373151   12729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52458 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:47:27.373751   12729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52458 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:47:27.738475   12729 ssh_runner.go:195] Run: systemctl --version
	I0114 02:47:27.743464   12729 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 02:47:27.753231   12729 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 02:47:27.753299   12729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 02:47:27.762549   12729 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 02:47:27.775374   12729 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 02:47:27.851452   12729 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 02:47:27.922668   12729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:47:27.994372   12729 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 02:47:28.229391   12729 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:47:28.259009   12729 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:47:28.309134   12729 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	I0114 02:47:28.309310   12729 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-024716 dig +short host.docker.internal
	I0114 02:47:28.425397   12729 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 02:47:28.425531   12729 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 02:47:28.429871   12729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
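The one-liner above edits /etc/hosts idempotently: filter out any stale host.minikube.internal entry, append a fresh mapping to the address returned by dig, and copy the temp file back with sudo; the same pattern is used again further down for control-plane.minikube.internal. A sketch of that rewrite in Go, pointed at a local sample file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so exactly one line maps host to ip.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blanks and any stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Address and name taken from the log above; the file name is a stand-in.
	if err := pinHost("hosts.sample", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}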
	I0114 02:47:28.439810   12729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:47:28.498291   12729 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 02:47:28.498376   12729 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:47:28.523285   12729 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0114 02:47:28.523308   12729 docker.go:543] Images already preloaded, skipping extraction
	I0114 02:47:28.523415   12729 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:47:28.546606   12729 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0114 02:47:28.546625   12729 cache_images.go:84] Images are preloaded, skipping loading
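cache_images.go:84 skips image loading because everything Kubernetes v1.16.0 needs is already present in the docker daemon. A sketch of that presence check, using the image list from the stdout block above as the expected set (illustrative, not minikube's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Expected images for v1.16.0, copied from the preload listing above.
	expected := []string{
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/kube-controller-manager:v1.16.0",
		"k8s.gcr.io/kube-proxy:v1.16.0",
		"k8s.gcr.io/kube-scheduler:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
		"k8s.gcr.io/coredns:1.6.2",
		"k8s.gcr.io/pause:3.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would need to load:", img)
			return
		}
	}
	fmt.Println("images are preloaded, skipping loading")
}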
	I0114 02:47:28.546768   12729 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 02:47:28.616195   12729 cni.go:95] Creating CNI manager for ""
	I0114 02:47:28.616210   12729 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:47:28.616225   12729 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 02:47:28.616239   12729 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-024716 NodeName:kubernetes-upgrade-024716 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 02:47:28.616364   12729 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-024716"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-024716
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 02:47:28.616453   12729 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-024716 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-024716 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 02:47:28.616532   12729 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0114 02:47:28.624685   12729 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 02:47:28.624756   12729 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 02:47:28.632119   12729 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0114 02:47:28.644953   12729 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 02:47:28.658047   12729 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0114 02:47:28.671255   12729 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0114 02:47:28.675517   12729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:47:28.685371   12729 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716 for IP: 192.168.76.2
	I0114 02:47:28.685528   12729 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 02:47:28.685594   12729 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 02:47:28.685645   12729 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.key
	I0114 02:47:28.685662   12729 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.crt with IP's: []
	I0114 02:47:28.761330   12729 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.crt ...
	I0114 02:47:28.761341   12729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.crt: {Name:mkbfdfc400c1a2ccaf071a4d37a234ac837f32c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:47:28.761665   12729 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.key ...
	I0114 02:47:28.761673   12729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.key: {Name:mke2fc62d85539607190c34c6753ea1021a376d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:47:28.761884   12729 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.key.31bdca25
	I0114 02:47:28.761913   12729 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 02:47:28.832290   12729 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.crt.31bdca25 ...
	I0114 02:47:28.832298   12729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.crt.31bdca25: {Name:mk083488e365ea47b609a498c3f9890f972db0c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:47:28.832518   12729 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.key.31bdca25 ...
	I0114 02:47:28.832526   12729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.key.31bdca25: {Name:mke38d0429222e7cd0ca4d5c9cc58824b26c7f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:47:28.832717   12729 certs.go:320] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.crt
	I0114 02:47:28.832890   12729 certs.go:324] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.key
	I0114 02:47:28.833057   12729 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.key
	I0114 02:47:28.833077   12729 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.crt with IP's: []
	I0114 02:47:29.026937   12729 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.crt ...
	I0114 02:47:29.026953   12729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.crt: {Name:mk558c7d9d8be180d0b7b924ded7a929d4dcd99b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:47:29.027254   12729 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.key ...
	I0114 02:47:29.027263   12729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.key: {Name:mk8028d2676846946b05a1a79115da184555a6d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:47:29.027725   12729 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 02:47:29.027780   12729 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 02:47:29.027799   12729 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 02:47:29.027839   12729 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 02:47:29.027881   12729 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 02:47:29.027922   12729 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 02:47:29.027997   12729 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:47:29.028613   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 02:47:29.047959   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 02:47:29.065326   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 02:47:29.082807   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 02:47:29.100347   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 02:47:29.117494   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 02:47:29.134667   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 02:47:29.151597   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 02:47:29.168578   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 02:47:29.185881   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 02:47:29.202988   12729 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 02:47:29.220031   12729 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 02:47:29.233106   12729 ssh_runner.go:195] Run: openssl version
	I0114 02:47:29.239127   12729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 02:47:29.247551   12729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:47:29.251643   12729 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:47:29.251694   12729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:47:29.257124   12729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 02:47:29.265364   12729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 02:47:29.273296   12729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 02:47:29.277245   12729 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:47:29.277291   12729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 02:47:29.282868   12729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 02:47:29.291089   12729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 02:47:29.299291   12729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 02:47:29.303359   12729 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:47:29.303415   12729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 02:47:29.308852   12729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 02:47:29.316768   12729 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-024716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-024716 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:47:29.316885   12729 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 02:47:29.338619   12729 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 02:47:29.346431   12729 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 02:47:29.353982   12729 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 02:47:29.354043   12729 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 02:47:29.361330   12729 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 02:47:29.361359   12729 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 02:47:29.408176   12729 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0114 02:47:29.408220   12729 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 02:47:29.703355   12729 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 02:47:29.703452   12729 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 02:47:29.703569   12729 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0114 02:47:29.921792   12729 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 02:47:29.922524   12729 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 02:47:29.928808   12729 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0114 02:47:29.990012   12729 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 02:47:30.034201   12729 out.go:204]   - Generating certificates and keys ...
	I0114 02:47:30.034299   12729 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 02:47:30.034371   12729 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 02:47:30.179055   12729 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 02:47:30.367831   12729 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 02:47:30.690210   12729 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 02:47:30.813354   12729 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 02:47:30.924978   12729 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 02:47:30.925093   12729 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-024716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0114 02:47:31.017312   12729 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 02:47:31.017426   12729 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-024716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0114 02:47:31.097255   12729 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 02:47:31.165141   12729 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 02:47:31.342536   12729 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 02:47:31.342623   12729 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 02:47:31.465647   12729 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 02:47:31.655434   12729 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 02:47:31.738975   12729 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 02:47:31.843233   12729 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 02:47:31.843718   12729 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 02:47:31.865130   12729 out.go:204]   - Booting up control plane ...
	I0114 02:47:31.865291   12729 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 02:47:31.865443   12729 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 02:47:31.865578   12729 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 02:47:31.865724   12729 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 02:47:31.865945   12729 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 02:48:11.852398   12729 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 02:48:11.852864   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:48:11.853103   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:48:16.854627   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:48:16.854847   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:48:26.856485   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:48:26.856696   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:48:46.858562   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:48:46.858751   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:49:26.860396   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:49:26.860635   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:49:26.860646   12729 kubeadm.go:317] 
	I0114 02:49:26.860701   12729 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 02:49:26.860743   12729 kubeadm.go:317] 	timed out waiting for the condition
	I0114 02:49:26.860749   12729 kubeadm.go:317] 
	I0114 02:49:26.860779   12729 kubeadm.go:317] This error is likely caused by:
	I0114 02:49:26.860812   12729 kubeadm.go:317] 	- The kubelet is not running
	I0114 02:49:26.860933   12729 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 02:49:26.860947   12729 kubeadm.go:317] 
	I0114 02:49:26.861052   12729 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 02:49:26.861087   12729 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 02:49:26.861120   12729 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 02:49:26.861125   12729 kubeadm.go:317] 
	I0114 02:49:26.861240   12729 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 02:49:26.861331   12729 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0114 02:49:26.861415   12729 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0114 02:49:26.861469   12729 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0114 02:49:26.861560   12729 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 02:49:26.861595   12729 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0114 02:49:26.863842   12729 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 02:49:26.863943   12729 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0114 02:49:26.864037   12729 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 02:49:26.864107   12729 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 02:49:26.864162   12729 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0114 02:49:26.864315   12729 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-024716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-024716 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0114 02:49:26.864348   12729 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0114 02:49:27.280296   12729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:49:27.289901   12729 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 02:49:27.289957   12729 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 02:49:27.297345   12729 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 02:49:27.297372   12729 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 02:49:27.345171   12729 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0114 02:49:27.345215   12729 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 02:49:27.643275   12729 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 02:49:27.643384   12729 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 02:49:27.643482   12729 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 02:49:27.866437   12729 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 02:49:27.867225   12729 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 02:49:27.873784   12729 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0114 02:49:27.943262   12729 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 02:49:27.964782   12729 out.go:204]   - Generating certificates and keys ...
	I0114 02:49:27.964894   12729 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 02:49:27.964966   12729 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 02:49:27.965041   12729 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 02:49:27.965107   12729 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0114 02:49:27.965201   12729 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0114 02:49:27.965265   12729 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0114 02:49:27.965347   12729 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0114 02:49:27.965408   12729 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0114 02:49:27.965576   12729 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 02:49:27.965691   12729 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 02:49:27.965770   12729 kubeadm.go:317] [certs] Using the existing "sa" key
	I0114 02:49:27.965811   12729 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 02:49:28.126806   12729 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 02:49:28.261893   12729 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 02:49:28.334289   12729 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 02:49:28.374701   12729 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 02:49:28.375327   12729 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 02:49:28.399771   12729 out.go:204]   - Booting up control plane ...
	I0114 02:49:28.400039   12729 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 02:49:28.400260   12729 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 02:49:28.400427   12729 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 02:49:28.400579   12729 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 02:49:28.400850   12729 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 02:50:08.384384   12729 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 02:50:08.385080   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:50:08.385318   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:50:13.386244   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:50:13.386410   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:50:23.387196   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:50:23.387403   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:50:43.388132   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:50:43.388277   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:51:23.389296   12729 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 02:51:23.389459   12729 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 02:51:23.389469   12729 kubeadm.go:317] 
	I0114 02:51:23.389534   12729 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 02:51:23.389587   12729 kubeadm.go:317] 	timed out waiting for the condition
	I0114 02:51:23.389601   12729 kubeadm.go:317] 
	I0114 02:51:23.389654   12729 kubeadm.go:317] This error is likely caused by:
	I0114 02:51:23.389682   12729 kubeadm.go:317] 	- The kubelet is not running
	I0114 02:51:23.389755   12729 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 02:51:23.389759   12729 kubeadm.go:317] 
	I0114 02:51:23.389838   12729 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 02:51:23.389868   12729 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 02:51:23.389910   12729 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 02:51:23.389916   12729 kubeadm.go:317] 
	I0114 02:51:23.389994   12729 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 02:51:23.390070   12729 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0114 02:51:23.390197   12729 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0114 02:51:23.390246   12729 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0114 02:51:23.390318   12729 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 02:51:23.390345   12729 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0114 02:51:23.392941   12729 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 02:51:23.393045   12729 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0114 02:51:23.393137   12729 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 02:51:23.393201   12729 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 02:51:23.393261   12729 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 02:51:23.393292   12729 kubeadm.go:398] StartCluster complete in 3m54.074749839s
	I0114 02:51:23.393376   12729 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 02:51:23.415925   12729 logs.go:274] 0 containers: []
	W0114 02:51:23.415938   12729 logs.go:276] No container was found matching "kube-apiserver"
	I0114 02:51:23.416023   12729 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 02:51:23.440009   12729 logs.go:274] 0 containers: []
	W0114 02:51:23.440021   12729 logs.go:276] No container was found matching "etcd"
	I0114 02:51:23.440087   12729 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 02:51:23.464706   12729 logs.go:274] 0 containers: []
	W0114 02:51:23.464719   12729 logs.go:276] No container was found matching "coredns"
	I0114 02:51:23.464806   12729 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 02:51:23.489828   12729 logs.go:274] 0 containers: []
	W0114 02:51:23.489843   12729 logs.go:276] No container was found matching "kube-scheduler"
	I0114 02:51:23.489927   12729 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 02:51:23.515133   12729 logs.go:274] 0 containers: []
	W0114 02:51:23.515153   12729 logs.go:276] No container was found matching "kube-proxy"
	I0114 02:51:23.515244   12729 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 02:51:23.541938   12729 logs.go:274] 0 containers: []
	W0114 02:51:23.541953   12729 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 02:51:23.542040   12729 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 02:51:23.565371   12729 logs.go:274] 0 containers: []
	W0114 02:51:23.565389   12729 logs.go:276] No container was found matching "storage-provisioner"
	I0114 02:51:23.565481   12729 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 02:51:23.590527   12729 logs.go:274] 0 containers: []
	W0114 02:51:23.590541   12729 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 02:51:23.590549   12729 logs.go:123] Gathering logs for describe nodes ...
	I0114 02:51:23.590555   12729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 02:51:23.651383   12729 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 02:51:23.651396   12729 logs.go:123] Gathering logs for Docker ...
	I0114 02:51:23.651402   12729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 02:51:23.670463   12729 logs.go:123] Gathering logs for container status ...
	I0114 02:51:23.670480   12729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 02:51:25.720973   12729 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050465011s)
	I0114 02:51:25.721112   12729 logs.go:123] Gathering logs for kubelet ...
	I0114 02:51:25.721121   12729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 02:51:25.760624   12729 logs.go:123] Gathering logs for dmesg ...
	I0114 02:51:25.760640   12729 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0114 02:51:25.774635   12729 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0114 02:51:25.774657   12729 out.go:239] * 
	W0114 02:51:25.774765   12729 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 02:51:25.774781   12729 out.go:239] * 
	W0114 02:51:25.775449   12729 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 02:51:25.837751   12729 out.go:177] 
	W0114 02:51:25.879802   12729 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 02:51:25.879911   12729 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0114 02:51:25.879976   12729 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0114 02:51:25.942593   12729 out.go:177] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
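The exit status 109 here is the K8S_KUBELET_NOT_RUNNING exit shown in the stderr above: kubeadm timed out waiting for the kubelet's healthz endpoint on 127.0.0.1:10248 while bootstrapping v1.16.0. A minimal troubleshooting sketch based only on the suggestions printed in that output (it assumes the profile container kubernetes-upgrade-024716 is still up; the --extra-config flag is the one the log itself proposes):

	docker exec kubernetes-upgrade-024716 systemctl status kubelet --no-pager
	docker exec kubernetes-upgrade-024716 journalctl -xeu kubelet --no-pager | tail -n 50
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --kubernetes-version=v1.16.0 --driver=docker --extra-config=kubelet.cgroup-driver=systemd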
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-024716
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-024716: (1.591388551s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-024716 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-024716 status --format={{.Host}}: exit status 7 (113.964494ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (4m36.74041183s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-024716 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (584.354224ms)

-- stdout --
	* [kubernetes-upgrade-024716] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-024716
	    minikube start -p kubernetes-upgrade-024716 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0247162 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-024716 --kubernetes-version=v1.25.3
	    

** /stderr **
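The exit status 106 above is the expected K8S_DOWNGRADE_UNSUPPORTED refusal: minikube will not downgrade the existing v1.25.3 cluster to v1.16.0 in place, and the suggestion block lists the three ways out (recreate at v1.16.0, start a second cluster, or keep v1.25.3). Before picking one, the pinned and running versions can be confirmed with commands that already appear elsewhere in this run; shown here only as a convenience sketch:

	out/minikube-darwin-amd64 profile list --output json
	kubectl --context kubernetes-upgrade-024716 version --output=json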
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-024716 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (30.209473661s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2023-01-14 02:56:35.335451 -0800 PST m=+3074.618546500
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-024716
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-024716:

-- stdout --
	[
	    {
	        "Id": "c35dacaa9d2fb697a36c267711ae6fffb5b81913a725d99ee8738cfdf12a0f61",
	        "Created": "2023-01-14T10:47:24.028372316Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 173173,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:51:29.059689675Z",
	            "FinishedAt": "2023-01-14T10:51:26.507513964Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/c35dacaa9d2fb697a36c267711ae6fffb5b81913a725d99ee8738cfdf12a0f61/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c35dacaa9d2fb697a36c267711ae6fffb5b81913a725d99ee8738cfdf12a0f61/hostname",
	        "HostsPath": "/var/lib/docker/containers/c35dacaa9d2fb697a36c267711ae6fffb5b81913a725d99ee8738cfdf12a0f61/hosts",
	        "LogPath": "/var/lib/docker/containers/c35dacaa9d2fb697a36c267711ae6fffb5b81913a725d99ee8738cfdf12a0f61/c35dacaa9d2fb697a36c267711ae6fffb5b81913a725d99ee8738cfdf12a0f61-json.log",
	        "Name": "/kubernetes-upgrade-024716",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-024716:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-024716",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/59347266b026065f7b2184299884f5d66c5b273a7cf8e881b362925e98455f3f-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/59347266b026065f7b2184299884f5d66c5b273a7cf8e881b362925e98455f3f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/59347266b026065f7b2184299884f5d66c5b273a7cf8e881b362925e98455f3f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/59347266b026065f7b2184299884f5d66c5b273a7cf8e881b362925e98455f3f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-024716",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-024716/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-024716",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-024716",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-024716",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "757491ef86d4c69c51d3d441681c3f1bc9c951cce3539d23cbcc7f1273dd72e8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52691"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52687"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52688"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52689"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "52690"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/757491ef86d4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-024716": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c35dacaa9d2f",
	                        "kubernetes-upgrade-024716"
	                    ],
	                    "NetworkID": "53c9e541dcac8f36ae75f0702ac8ef75301ce2bc66617e377445e7345d201cd9",
	                    "EndpointID": "028f3e47fc32a95968f959779df2ca0eb9eacafcc687690284ebd62c518e9970",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
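The block above is the raw docker inspect snapshot taken by the post-mortem helper. When reading it by hand, a Go-template format string can pull out individual fields instead of scanning the full JSON; for example (container name and field paths taken from the dump above, where 8443/tcp is the forwarded API server port):

	docker inspect --format '{{.State.Status}} restarts={{.RestartCount}}' kubernetes-upgrade-024716
	docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-024716

With the values captured above these print "running restarts=0" and "52690".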
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-024716 -n kubernetes-upgrade-024716
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-024716 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-024716 logs -n 25: (2.996689168s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-025109                | pause-025109              | jenkins | v1.28.0 | 14 Jan 23 02:52 PST | 14 Jan 23 02:52 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| delete  | -p pause-025109                | pause-025109              | jenkins | v1.28.0 | 14 Jan 23 02:52 PST | 14 Jan 23 02:52 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| profile | list --output json             | minikube                  | jenkins | v1.28.0 | 14 Jan 23 02:52 PST | 14 Jan 23 02:52 PST |
	| delete  | -p pause-025109                | pause-025109              | jenkins | v1.28.0 | 14 Jan 23 02:52 PST | 14 Jan 23 02:52 PST |
	| start   | -p NoKubernetes-025238         | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:52 PST |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-025238         | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:52 PST | 14 Jan 23 02:53 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-025238         | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-025238         | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	| start   | -p NoKubernetes-025238         | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-025238 sudo    | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:53 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| profile | list                           | minikube                  | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	| profile | list --output=json             | minikube                  | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	| stop    | -p NoKubernetes-025238         | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	| start   | -p NoKubernetes-025238         | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-025238 sudo    | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:53 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-025238         | NoKubernetes-025238       | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
	| start   | -p auto-024325 --memory=2048   | auto-024325               | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:54 PST |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p auto-024325 pgrep -a        | auto-024325               | jenkins | v1.28.0 | 14 Jan 23 02:54 PST | 14 Jan 23 02:54 PST |
	|         | kubelet                        |                           |         |         |                     |                     |
	| delete  | -p auto-024325                 | auto-024325               | jenkins | v1.28.0 | 14 Jan 23 02:54 PST | 14 Jan 23 02:54 PST |
	| start   | -p kindnet-024326              | kindnet-024326            | jenkins | v1.28.0 | 14 Jan 23 02:54 PST | 14 Jan 23 02:55 PST |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker  |                           |         |         |                     |                     |
	| ssh     | -p kindnet-024326 pgrep -a     | kindnet-024326            | jenkins | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 02:55 PST |
	|         | kubelet                        |                           |         |         |                     |                     |
	| delete  | -p kindnet-024326              | kindnet-024326            | jenkins | v1.28.0 | 14 Jan 23 02:56 PST | 14 Jan 23 02:56 PST |
	| start   | -p kubernetes-upgrade-024716   | kubernetes-upgrade-024716 | jenkins | v1.28.0 | 14 Jan 23 02:56 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p cilium-024326 --memory=2048 | cilium-024326             | jenkins | v1.28.0 | 14 Jan 23 02:56 PST |                     |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-024716   | kubernetes-upgrade-024716 | jenkins | v1.28.0 | 14 Jan 23 02:56 PST | 14 Jan 23 02:56 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 02:56:05
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 02:56:05.208582   15187 notify.go:220] Checking for updates...
	I0114 02:56:05.230343   15187 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:56:05.188169   15190 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:56:05.208479   15190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:56:05.208505   15190 out.go:309] Setting ErrFile to fd 2...
	I0114 02:56:05.208519   15190 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:56:05.208807   15190 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:56:05.230901   15190 out.go:303] Setting JSON to false
	I0114 02:56:05.251477   15190 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3339,"bootTime":1673690426,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 02:56:05.251593   15190 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:56:05.272427   15187 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:56:05.293386   15190 out.go:177] * [kubernetes-upgrade-024716] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:56:05.356154   15187 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:56:05.393537   15190 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:56:05.415369   15187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:56:05.356464   15190 notify.go:220] Checking for updates...
	I0114 02:56:05.473559   15187 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 02:56:05.473563   15190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:56:05.532181   15190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:56:05.574287   15190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:56:05.616498   15190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 02:56:05.496255   15187 config.go:180] Loaded profile config "kubernetes-upgrade-024716": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:56:05.496365   15187 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:56:05.649037   15187 docker.go:138] docker version: linux-20.10.21
	I0114 02:56:05.649235   15187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:56:05.798353   15187 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 10:56:05.700861908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:56:05.820221   15187 out.go:177] * Using the docker driver based on user configuration
	I0114 02:56:05.658535   15190 config.go:180] Loaded profile config "kubernetes-upgrade-024716": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:56:05.658958   15190 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:56:05.721215   15190 docker.go:138] docker version: linux-20.10.21
	I0114 02:56:05.721354   15190 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:56:05.867755   15190 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 10:56:05.774688855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:56:05.911800   15190 out.go:177] * Using the docker driver based on existing profile
	I0114 02:56:05.856724   15187 start.go:294] selected driver: docker
	I0114 02:56:05.856740   15187 start.go:838] validating driver "docker" against <nil>
	I0114 02:56:05.856765   15187 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:56:05.859250   15187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:56:06.035262   15187 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 10:56:05.936129596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:56:06.035385   15187 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 02:56:06.035544   15187 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 02:56:06.057126   15187 out.go:177] * Using Docker Desktop driver with root privileges
	I0114 02:56:06.077875   15187 cni.go:95] Creating CNI manager for "cilium"
	I0114 02:56:06.077895   15187 start_flags.go:314] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0114 02:56:06.077909   15187 start_flags.go:319] config:
	{Name:cilium-024326 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-024326 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:56:06.099109   15187 out.go:177] * Starting control plane node cilium-024326 in cluster cilium-024326
	I0114 02:56:05.932820   15190 start.go:294] selected driver: docker
	I0114 02:56:05.932864   15190 start.go:838] validating driver "docker" against &{Name:kubernetes-upgrade-024716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-024716 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:56:05.933156   15190 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:56:05.937266   15190 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:56:06.135342   15190 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 10:56:05.989871732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:56:06.135528   15190 cni.go:95] Creating CNI manager for ""
	I0114 02:56:06.135548   15190 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:56:06.135621   15190 start_flags.go:319] config:
	{Name:kubernetes-upgrade-024716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-024716 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmn
et/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:56:06.140960   15187 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:56:06.199241   15190 out.go:177] * Starting control plane node kubernetes-upgrade-024716 in cluster kubernetes-upgrade-024716
	I0114 02:56:06.273901   15187 out.go:177] * Pulling base image ...
	I0114 02:56:06.311036   15190 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:56:06.368797   15190 out.go:177] * Pulling base image ...
	I0114 02:56:06.347984   15187 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:56:06.348062   15187 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:56:06.348101   15187 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 02:56:06.348136   15187 cache.go:57] Caching tarball of preloaded images
	I0114 02:56:06.349293   15187 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 02:56:06.349341   15187 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 02:56:06.349571   15187 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/config.json ...
	I0114 02:56:06.349631   15187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/config.json: {Name:mkf2da355b20dd09975ce90ddd23a71f1f3983d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:56:06.406332   15187 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 02:56:06.406348   15187 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 02:56:06.406363   15187 cache.go:193] Successfully downloaded all kic artifacts
	I0114 02:56:06.406402   15187 start.go:364] acquiring machines lock for cilium-024326: {Name:mk67572411356ae511030e2754b0f10c526ff83a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:56:06.406557   15187 start.go:368] acquired machines lock for "cilium-024326" in 141.722µs
	I0114 02:56:06.406590   15187 start.go:93] Provisioning new machine with config: &{Name:cilium-024326 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-024326 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 02:56:06.406687   15187 start.go:125] createHost starting for "" (driver="docker")
	I0114 02:56:06.390062   15190 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:56:06.390101   15190 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:56:06.390117   15190 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 02:56:06.390127   15190 cache.go:57] Caching tarball of preloaded images
	I0114 02:56:06.390268   15190 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 02:56:06.390279   15190 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 02:56:06.390864   15190 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/config.json ...
	I0114 02:56:06.458804   15190 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 02:56:06.458819   15190 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 02:56:06.458837   15190 cache.go:193] Successfully downloaded all kic artifacts
	I0114 02:56:06.458882   15190 start.go:364] acquiring machines lock for kubernetes-upgrade-024716: {Name:mk762df0d9b21fb27f39edb10bf3e1597ec98350 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 02:56:06.458973   15190 start.go:368] acquired machines lock for "kubernetes-upgrade-024716" in 67.483µs
	I0114 02:56:06.458997   15190 start.go:96] Skipping create...Using existing machine configuration
	I0114 02:56:06.459008   15190 fix.go:55] fixHost starting: 
	I0114 02:56:06.459293   15190 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-024716 --format={{.State.Status}}
	I0114 02:56:06.520526   15190 fix.go:103] recreateIfNeeded on kubernetes-upgrade-024716: state=Running err=<nil>
	W0114 02:56:06.520556   15190 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 02:56:06.542307   15190 out.go:177] * Updating the running docker "kubernetes-upgrade-024716" container ...
	I0114 02:56:06.448922   15187 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0114 02:56:06.449460   15187 start.go:159] libmachine.API.Create for "cilium-024326" (driver="docker")
	I0114 02:56:06.449529   15187 client.go:168] LocalClient.Create starting
	I0114 02:56:06.449721   15187 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem
	I0114 02:56:06.449825   15187 main.go:134] libmachine: Decoding PEM data...
	I0114 02:56:06.449872   15187 main.go:134] libmachine: Parsing certificate...
	I0114 02:56:06.449992   15187 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem
	I0114 02:56:06.450061   15187 main.go:134] libmachine: Decoding PEM data...
	I0114 02:56:06.450091   15187 main.go:134] libmachine: Parsing certificate...
	I0114 02:56:06.450987   15187 cli_runner.go:164] Run: docker network inspect cilium-024326 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0114 02:56:06.513374   15187 cli_runner.go:211] docker network inspect cilium-024326 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0114 02:56:06.513484   15187 network_create.go:280] running [docker network inspect cilium-024326] to gather additional debugging logs...
	I0114 02:56:06.513501   15187 cli_runner.go:164] Run: docker network inspect cilium-024326
	W0114 02:56:06.591323   15187 cli_runner.go:211] docker network inspect cilium-024326 returned with exit code 1
	I0114 02:56:06.591347   15187 network_create.go:283] error running [docker network inspect cilium-024326]: docker network inspect cilium-024326: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-024326
	I0114 02:56:06.591361   15187 network_create.go:285] output of [docker network inspect cilium-024326]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-024326
	
	** /stderr **
	I0114 02:56:06.591458   15187 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 02:56:06.647560   15187 network.go:277] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000a926e0] misses:0}
	I0114 02:56:06.647603   15187 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:56:06.647619   15187 network_create.go:123] attempt to create docker network cilium-024326 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0114 02:56:06.647701   15187 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-024326 cilium-024326
	W0114 02:56:06.705581   15187 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-024326 cilium-024326 returned with exit code 1
	W0114 02:56:06.705617   15187 network_create.go:115] failed to create docker network cilium-024326 192.168.49.0/24, will retry: subnet is taken
	I0114 02:56:06.705878   15187 network.go:268] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a926e0] amended:false}} dirty:map[] misses:0}
	I0114 02:56:06.705893   15187 network.go:213] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:56:06.706111   15187 network.go:277] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a926e0] amended:true}} dirty:map[192.168.49.0:0xc000a926e0 192.168.58.0:0xc000a92718] misses:0}
	I0114 02:56:06.706124   15187 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:56:06.706133   15187 network_create.go:123] attempt to create docker network cilium-024326 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0114 02:56:06.706238   15187 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-024326 cilium-024326
	W0114 02:56:06.762099   15187 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-024326 cilium-024326 returned with exit code 1
	W0114 02:56:06.762145   15187 network_create.go:115] failed to create docker network cilium-024326 192.168.58.0/24, will retry: subnet is taken
	I0114 02:56:06.762418   15187 network.go:268] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a926e0] amended:true}} dirty:map[192.168.49.0:0xc000a926e0 192.168.58.0:0xc000a92718] misses:1}
	I0114 02:56:06.762433   15187 network.go:213] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:56:06.762639   15187 network.go:277] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000a926e0] amended:true}} dirty:map[192.168.49.0:0xc000a926e0 192.168.58.0:0xc000a92718 192.168.67.0:0xc00071b8d8] misses:1}
	I0114 02:56:06.762651   15187 network.go:210] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 02:56:06.762658   15187 network_create.go:123] attempt to create docker network cilium-024326 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0114 02:56:06.762758   15187 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-024326 cilium-024326
	I0114 02:56:06.862400   15187 network_create.go:107] docker network cilium-024326 192.168.67.0/24 created
	I0114 02:56:06.862432   15187 kic.go:117] calculated static IP "192.168.67.2" for the "cilium-024326" container
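The sequence above is the subnet-selection loop: the network create is attempted on 192.168.49.0/24, then 192.168.58.0/24, and finally succeeds on 192.168.67.0/24 once Docker stops rejecting the block as taken. The Go sketch below illustrates that retry pattern, assuming only a local docker CLI; it is a simplified stand-in, not minikube's network_create.go, and the "overlap"/"taken" string checks are illustrative only.

// Minimal sketch of retrying "docker network create" across /24 blocks,
// stepping the third octet by 9 (49 -> 58 -> 67) as in the log above.
// Illustrative only; not minikube's actual implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork tries successive private /24 subnets until the create succeeds.
func createNetwork(name string) (string, error) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		// Docker reports an in-use block as a pool overlap; move on to the next /24.
		msg := strings.ToLower(string(out))
		if strings.Contains(msg, "overlap") || strings.Contains(msg, "taken") {
			continue
		}
		return "", fmt.Errorf("docker network create failed: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free /24 subnet found for network %q", name)
}

func main() {
	subnet, err := createNetwork("example-024326")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("created network on", subnet)
}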
	I0114 02:56:06.862557   15187 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0114 02:56:06.925559   15187 cli_runner.go:164] Run: docker volume create cilium-024326 --label name.minikube.sigs.k8s.io=cilium-024326 --label created_by.minikube.sigs.k8s.io=true
	I0114 02:56:06.984817   15187 oci.go:103] Successfully created a docker volume cilium-024326
	I0114 02:56:06.984971   15187 cli_runner.go:164] Run: docker run --rm --name cilium-024326-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-024326 --entrypoint /usr/bin/test -v cilium-024326:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0114 02:56:07.451406   15187 oci.go:107] Successfully prepared a docker volume cilium-024326
	I0114 02:56:07.451441   15187 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:56:07.451455   15187 kic.go:190] Starting extracting preloaded images to volume ...
	I0114 02:56:07.451601   15187 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-024326:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0114 02:56:06.583927   15190 machine.go:88] provisioning docker machine ...
	I0114 02:56:06.583976   15190 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-024716"
	I0114 02:56:06.584147   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:06.645529   15190 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:06.645765   15190 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52691 <nil> <nil>}
	I0114 02:56:06.645776   15190 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-024716 && echo "kubernetes-upgrade-024716" | sudo tee /etc/hostname
	I0114 02:56:06.772191   15190 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-024716
	
	I0114 02:56:06.772303   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:06.836590   15190 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:06.836766   15190 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52691 <nil> <nil>}
	I0114 02:56:06.836784   15190 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-024716' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-024716/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-024716' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 02:56:06.959268   15190 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:56:06.959308   15190 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 02:56:06.959333   15190 ubuntu.go:177] setting up certificates
	I0114 02:56:06.959346   15190 provision.go:83] configureAuth start
	I0114 02:56:06.959443   15190 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-024716
	I0114 02:56:07.022514   15190 provision.go:138] copyHostCerts
	I0114 02:56:07.022618   15190 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 02:56:07.022628   15190 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:56:07.022761   15190 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 02:56:07.022982   15190 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 02:56:07.022988   15190 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:56:07.023091   15190 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 02:56:07.023255   15190 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 02:56:07.023266   15190 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:56:07.023347   15190 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 02:56:07.023475   15190 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-024716 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-024716]
	I0114 02:56:07.308777   15190 provision.go:172] copyRemoteCerts
	I0114 02:56:07.308857   15190 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 02:56:07.308941   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:07.370368   15190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52691 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:56:07.459139   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 02:56:07.479984   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0114 02:56:07.501238   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 02:56:07.521134   15190 provision.go:86] duration metric: configureAuth took 561.771292ms
	I0114 02:56:07.521153   15190 ubuntu.go:193] setting minikube options for container-runtime
	I0114 02:56:07.521317   15190 config.go:180] Loaded profile config "kubernetes-upgrade-024716": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:56:07.521402   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:07.587630   15190 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:07.587810   15190 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52691 <nil> <nil>}
	I0114 02:56:07.587820   15190 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 02:56:07.710084   15190 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 02:56:07.710099   15190 ubuntu.go:71] root file system type: overlay
	I0114 02:56:07.710231   15190 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 02:56:07.710328   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:07.780884   15190 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:07.781141   15190 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52691 <nil> <nil>}
	I0114 02:56:07.781235   15190 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 02:56:07.913710   15190 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 02:56:07.913839   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:07.988726   15190 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:07.988902   15190 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 52691 <nil> <nil>}
	I0114 02:56:07.988917   15190 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 02:56:08.118360   15190 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:56:08.118388   15190 machine.go:91] provisioned docker machine in 1.534430039s
	I0114 02:56:08.118416   15190 start.go:300] post-start starting for "kubernetes-upgrade-024716" (driver="docker")
	I0114 02:56:08.118426   15190 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 02:56:08.118521   15190 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 02:56:08.118596   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:08.192951   15190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52691 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:56:08.284390   15190 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 02:56:08.290395   15190 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 02:56:08.290421   15190 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 02:56:08.290434   15190 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 02:56:08.290446   15190 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 02:56:08.290458   15190 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 02:56:08.290621   15190 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 02:56:08.290959   15190 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 02:56:08.291309   15190 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 02:56:08.302124   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:56:08.330289   15190 start.go:303] post-start completed in 211.857849ms
	I0114 02:56:08.330394   15190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:56:08.330474   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:08.406212   15190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52691 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:56:08.496391   15190 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 02:56:08.503297   15190 fix.go:57] fixHost completed within 2.044270114s
	I0114 02:56:08.503317   15190 start.go:83] releasing machines lock for "kubernetes-upgrade-024716", held for 2.044319295s
	I0114 02:56:08.503432   15190 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-024716
	I0114 02:56:08.577331   15190 ssh_runner.go:195] Run: cat /version.json
	I0114 02:56:08.577334   15190 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 02:56:08.577437   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:08.577436   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:08.655161   15190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52691 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:56:08.655690   15190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52691 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:56:08.745747   15190 ssh_runner.go:195] Run: systemctl --version
	I0114 02:56:08.804717   15190 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 02:56:08.818880   15190 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 02:56:08.818964   15190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 02:56:08.830810   15190 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 02:56:08.849785   15190 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 02:56:08.961652   15190 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 02:56:09.064736   15190 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:56:09.170798   15190 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 02:56:11.647832   15190 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.476994681s)
	I0114 02:56:11.647930   15190 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 02:56:11.740832   15190 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:56:11.837759   15190 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 02:56:11.857469   15190 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 02:56:11.857571   15190 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 02:56:11.867733   15190 start.go:472] Will wait 60s for crictl version
	I0114 02:56:11.867867   15190 ssh_runner.go:195] Run: which crictl
	I0114 02:56:11.873379   15190 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 02:56:11.924089   15190 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 02:56:11.924225   15190 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:56:11.985971   15190 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:56:12.071093   15190 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 02:56:12.071206   15190 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-024716 dig +short host.docker.internal
	I0114 02:56:12.252917   15190 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 02:56:12.253111   15190 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 02:56:12.260310   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:12.352312   15190 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:56:12.352443   15190 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:56:12.397333   15190 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0114 02:56:12.397357   15190 docker.go:543] Images already preloaded, skipping extraction
	I0114 02:56:12.397473   15190 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:56:12.437333   15190 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0114 02:56:12.437363   15190 cache_images.go:84] Images are preloaded, skipping loading
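The two image listings above are how the runner decides that loading can be skipped: every required image already appears in the docker images output. A minimal sketch of that presence check, assuming a local docker CLI; the required list here is a hypothetical subset for illustration, not the runner's full manifest.

// Minimal sketch: skip image loading when every required image is already
// present in "docker images --format {{.Repository}}:{{.Tag}}" output.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func haveAll(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	present := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		present[line] = true
	}
	for _, img := range required {
		if !present[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Hypothetical subset of the preloaded images listed above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
	}
	ok, err := haveAll(required)
	fmt.Println("preloaded:", ok, "err:", err)
}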
	I0114 02:56:12.437484   15190 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 02:56:12.572265   15190 cni.go:95] Creating CNI manager for ""
	I0114 02:56:12.572284   15190 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:56:12.572310   15190 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 02:56:12.572333   15190 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-024716 NodeName:kubernetes-upgrade-024716 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 02:56:12.572524   15190 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-024716"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 02:56:12.572642   15190 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-024716 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-024716 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 02:56:12.572728   15190 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 02:56:12.587564   15190 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 02:56:12.587682   15190 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 02:56:12.638231   15190 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
	I0114 02:56:12.662843   15190 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 02:56:12.688211   15190 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I0114 02:56:12.738590   15190 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0114 02:56:12.745614   15190 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716 for IP: 192.168.76.2
	I0114 02:56:12.745788   15190 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 02:56:12.745887   15190 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 02:56:12.746014   15190 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.key
	I0114 02:56:12.746146   15190 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.key.31bdca25
	I0114 02:56:12.746284   15190 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.key
	I0114 02:56:12.746769   15190 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 02:56:12.746883   15190 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 02:56:12.746916   15190 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 02:56:12.746982   15190 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 02:56:12.747030   15190 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 02:56:12.747070   15190 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 02:56:12.747161   15190 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:56:12.747806   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 02:56:12.783295   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 02:56:12.851764   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 02:56:12.874104   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 02:56:12.934965   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 02:56:12.958279   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 02:56:13.028386   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 02:56:13.057468   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 02:56:13.089120   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 02:56:13.142590   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 02:56:13.168170   15190 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 02:56:13.191760   15190 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 02:56:13.244094   15190 ssh_runner.go:195] Run: openssl version
	I0114 02:56:13.252080   15190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 02:56:13.265563   15190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 02:56:13.270940   15190 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:56:13.271026   15190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 02:56:13.277845   15190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 02:56:13.289317   15190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 02:56:13.329503   15190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:56:13.339233   15190 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:56:13.339326   15190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:56:13.347387   15190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 02:56:13.360691   15190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 02:56:13.372181   15190 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 02:56:13.378081   15190 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:56:13.378187   15190 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 02:56:13.385957   15190 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
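The openssl/ln commands above install each certificate under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash (for example 51391683.0), which is how the system trust store finds it. Below is a minimal Go sketch of that hash-and-symlink step, assuming openssl on PATH and write access to /etc/ssl/certs; it shells out to openssl rather than reimplementing the subject-hash algorithm, and is not minikube's certs.go code.

// Minimal sketch: compute the OpenSSL subject hash for a CA certificate and
// create the "<hash>.0" symlink in /etc/ssl/certs, mirroring the ln -fs
// commands in the log above. Illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash failed: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Remove any stale link first so repeated runs stay idempotent, like `ln -fs`.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}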
	I0114 02:56:13.398468   15190 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-024716 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-024716 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:56:13.398612   15190 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 02:56:13.443053   15190 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 02:56:13.452658   15190 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 02:56:13.452682   15190 kubeadm.go:627] restartCluster start
	I0114 02:56:13.452760   15190 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 02:56:13.469456   15190 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:56:13.469553   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:13.547619   15190 kubeconfig.go:92] found "kubernetes-upgrade-024716" server: "https://127.0.0.1:52690"
	I0114 02:56:13.548454   15190 kapi.go:59] client config for kubernetes-upgrade-024716: &rest.Config{Host:"https://127.0.0.1:52690", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:56:13.549342   15190 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 02:56:13.564042   15190 api_server.go:165] Checking apiserver status ...
	I0114 02:56:13.564132   15190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:56:13.576460   15190 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12586/cgroup
	W0114 02:56:13.586571   15190 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12586/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:56:13.586654   15190 ssh_runner.go:195] Run: ls
	I0114 02:56:13.594074   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:15.098775   15187 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-024326:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (7.64704707s)
	I0114 02:56:15.098795   15187 kic.go:199] duration metric: took 7.647284 seconds to extract preloaded images to volume
	I0114 02:56:15.098918   15187 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0114 02:56:15.240278   15187 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-024326 --name cilium-024326 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-024326 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-024326 --network cilium-024326 --ip 192.168.67.2 --volume cilium-024326:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0114 02:56:15.614981   15187 cli_runner.go:164] Run: docker container inspect cilium-024326 --format={{.State.Running}}
	I0114 02:56:15.722834   15187 cli_runner.go:164] Run: docker container inspect cilium-024326 --format={{.State.Status}}
	I0114 02:56:15.785528   15187 cli_runner.go:164] Run: docker exec cilium-024326 stat /var/lib/dpkg/alternatives/iptables
	I0114 02:56:15.893652   15187 oci.go:144] the created container "cilium-024326" has a running status.
	I0114 02:56:15.893690   15187 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/cilium-024326/id_rsa...
	I0114 02:56:15.947967   15187 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/cilium-024326/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0114 02:56:16.060742   15187 cli_runner.go:164] Run: docker container inspect cilium-024326 --format={{.State.Status}}
	I0114 02:56:16.123783   15187 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0114 02:56:16.123804   15187 kic_runner.go:114] Args: [docker exec --privileged cilium-024326 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0114 02:56:16.232315   15187 cli_runner.go:164] Run: docker container inspect cilium-024326 --format={{.State.Status}}
	I0114 02:56:16.292339   15187 machine.go:88] provisioning docker machine ...
	I0114 02:56:16.292394   15187 ubuntu.go:169] provisioning hostname "cilium-024326"
	I0114 02:56:16.292502   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:16.351352   15187 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:16.351559   15187 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53233 <nil> <nil>}
	I0114 02:56:16.351578   15187 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-024326 && echo "cilium-024326" | sudo tee /etc/hostname
	I0114 02:56:16.480207   15187 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-024326
	
	I0114 02:56:16.480327   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:16.539148   15187 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:16.539306   15187 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53233 <nil> <nil>}
	I0114 02:56:16.539320   15187 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-024326' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-024326/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-024326' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 02:56:16.656697   15187 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 02:56:16.656718   15187 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 02:56:16.656750   15187 ubuntu.go:177] setting up certificates
	I0114 02:56:16.656765   15187 provision.go:83] configureAuth start
	I0114 02:56:16.656863   15187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-024326
	I0114 02:56:16.715891   15187 provision.go:138] copyHostCerts
	I0114 02:56:16.715990   15187 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 02:56:16.715998   15187 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 02:56:16.716095   15187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 02:56:16.716304   15187 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 02:56:16.716310   15187 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 02:56:16.716373   15187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 02:56:16.716518   15187 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 02:56:16.716524   15187 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 02:56:16.716584   15187 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 02:56:16.716730   15187 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.cilium-024326 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-024326]
	I0114 02:56:17.038323   15187 provision.go:172] copyRemoteCerts
	I0114 02:56:17.038391   15187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 02:56:17.038452   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:17.096098   15187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53233 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/cilium-024326/id_rsa Username:docker}
	I0114 02:56:17.182990   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 02:56:17.200015   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0114 02:56:17.217120   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 02:56:17.234140   15187 provision.go:86] duration metric: configureAuth took 577.358953ms
	I0114 02:56:17.234154   15187 ubuntu.go:193] setting minikube options for container-runtime
	I0114 02:56:17.234336   15187 config.go:180] Loaded profile config "cilium-024326": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:56:17.234422   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:17.293541   15187 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:17.293709   15187 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53233 <nil> <nil>}
	I0114 02:56:17.293723   15187 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 02:56:17.410626   15187 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 02:56:17.410643   15187 ubuntu.go:71] root file system type: overlay
	I0114 02:56:17.410779   15187 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 02:56:17.410879   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:17.469503   15187 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:17.469649   15187 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53233 <nil> <nil>}
	I0114 02:56:17.469704   15187 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 02:56:17.597922   15187 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 02:56:17.598026   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:17.656699   15187 main.go:134] libmachine: Using SSH client type: native
	I0114 02:56:17.656850   15187 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53233 <nil> <nil>}
	I0114 02:56:17.656864   15187 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 02:56:18.254286   15187 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:56:17.595468236 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0114 02:56:18.254307   15187 machine.go:91] provisioned docker machine in 1.961924979s
	I0114 02:56:18.254314   15187 client.go:171] LocalClient.Create took 11.804689397s
	I0114 02:56:18.254331   15187 start.go:167] duration metric: libmachine.API.Create for "cilium-024326" took 11.804789493s
	I0114 02:56:18.254339   15187 start.go:300] post-start starting for "cilium-024326" (driver="docker")
	I0114 02:56:18.254345   15187 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 02:56:18.254427   15187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 02:56:18.254509   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:18.313709   15187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53233 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/cilium-024326/id_rsa Username:docker}
	I0114 02:56:18.401363   15187 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 02:56:18.404874   15187 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 02:56:18.404888   15187 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 02:56:18.404895   15187 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 02:56:18.404905   15187 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 02:56:18.404916   15187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 02:56:18.405010   15187 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 02:56:18.405172   15187 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 02:56:18.405344   15187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 02:56:18.412784   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:56:18.430825   15187 start.go:303] post-start completed in 176.473894ms
	I0114 02:56:18.431612   15187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-024326
	I0114 02:56:18.490612   15187 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/config.json ...
	I0114 02:56:18.491131   15187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:56:18.491202   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:18.549154   15187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53233 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/cilium-024326/id_rsa Username:docker}
	I0114 02:56:18.632210   15187 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 02:56:18.636809   15187 start.go:128] duration metric: createHost completed in 12.230023223s
	I0114 02:56:18.636825   15187 start.go:83] releasing machines lock for "cilium-024326", held for 12.230168139s
	I0114 02:56:18.636919   15187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-024326
	I0114 02:56:18.696398   15187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 02:56:18.696410   15187 ssh_runner.go:195] Run: cat /version.json
	I0114 02:56:18.696481   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:18.696487   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:18.760573   15187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53233 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/cilium-024326/id_rsa Username:docker}
	I0114 02:56:18.760838   15187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53233 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/cilium-024326/id_rsa Username:docker}
	I0114 02:56:18.897307   15187 ssh_runner.go:195] Run: systemctl --version
	I0114 02:56:18.902220   15187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0114 02:56:18.909496   15187 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0114 02:56:18.922011   15187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:56:18.989092   15187 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0114 02:56:19.068507   15187 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 02:56:19.079082   15187 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 02:56:19.079157   15187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 02:56:19.088626   15187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 02:56:19.101986   15187 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 02:56:19.179015   15187 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 02:56:19.253430   15187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:56:19.321837   15187 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 02:56:19.523955   15187 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 02:56:19.592190   15187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 02:56:19.659440   15187 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 02:56:19.669419   15187 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 02:56:19.669509   15187 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 02:56:19.673791   15187 start.go:472] Will wait 60s for crictl version
	I0114 02:56:19.673844   15187 ssh_runner.go:195] Run: which crictl
	I0114 02:56:19.677828   15187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 02:56:19.707989   15187 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 02:56:19.708090   15187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:56:19.736128   15187 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 02:56:19.810385   15187 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 02:56:19.810609   15187 cli_runner.go:164] Run: docker exec -t cilium-024326 dig +short host.docker.internal
	I0114 02:56:19.964552   15187 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 02:56:19.964671   15187 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 02:56:19.970582   15187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:56:19.986398   15187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-024326
	I0114 02:56:18.594682   15190 api_server.go:268] stopped: https://127.0.0.1:52690/healthz: Get "https://127.0.0.1:52690/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0114 02:56:18.594725   15190 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I0114 02:56:18.858410   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:20.049995   15187 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:56:20.070595   15187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:56:20.100213   15187 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 02:56:20.100235   15187 docker.go:543] Images already preloaded, skipping extraction
	I0114 02:56:20.100431   15187 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 02:56:20.125624   15187 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 02:56:20.125644   15187 cache_images.go:84] Images are preloaded, skipping loading
	I0114 02:56:20.125735   15187 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 02:56:20.198650   15187 cni.go:95] Creating CNI manager for "cilium"
	I0114 02:56:20.198707   15187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 02:56:20.198723   15187 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-024326 NodeName:cilium-024326 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 02:56:20.198848   15187 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cilium-024326"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 02:56:20.198954   15187 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cilium-024326 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:cilium-024326 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0114 02:56:20.199032   15187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 02:56:20.207170   15187 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 02:56:20.207268   15187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 02:56:20.214432   15187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
	I0114 02:56:20.227358   15187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 02:56:20.240684   15187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2035 bytes)
	I0114 02:56:20.253530   15187 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0114 02:56:20.257643   15187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 02:56:20.267232   15187 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326 for IP: 192.168.67.2
	I0114 02:56:20.267353   15187 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 02:56:20.267419   15187 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 02:56:20.267469   15187 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.key
	I0114 02:56:20.267485   15187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt with IP's: []
	I0114 02:56:20.350125   15187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt ...
	I0114 02:56:20.350145   15187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: {Name:mk639a2f62efd014a60a3a4a0cbd12352d415156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:56:20.350408   15187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.key ...
	I0114 02:56:20.350415   15187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.key: {Name:mk62ecc3de8abae095a0580a1ac09545960d31a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:56:20.350605   15187 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.key.c7fa3a9e
	I0114 02:56:20.350625   15187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 02:56:20.392404   15187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.crt.c7fa3a9e ...
	I0114 02:56:20.392414   15187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.crt.c7fa3a9e: {Name:mkce291c2f26e4b9b2618b39d8e04d4ff4fffc51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:56:20.392650   15187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.key.c7fa3a9e ...
	I0114 02:56:20.392658   15187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.key.c7fa3a9e: {Name:mkf2e19d7e4683e799a27af9f5cdaa767e8f7fcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:56:20.392841   15187 certs.go:320] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.crt
	I0114 02:56:20.393024   15187 certs.go:324] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.key
	I0114 02:56:20.393204   15187 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/proxy-client.key
	I0114 02:56:20.393222   15187 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/proxy-client.crt with IP's: []
	I0114 02:56:20.468520   15187 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/proxy-client.crt ...
	I0114 02:56:20.468529   15187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/proxy-client.crt: {Name:mkc6afe814406cd0d8b428345f491fc8ad22e893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:56:20.468762   15187 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/proxy-client.key ...
	I0114 02:56:20.468769   15187 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/proxy-client.key: {Name:mk63a1030540b1a067e18fb35f723affa46447f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:56:20.469192   15187 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 02:56:20.469237   15187 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 02:56:20.469252   15187 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 02:56:20.469290   15187 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 02:56:20.469329   15187 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 02:56:20.469369   15187 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 02:56:20.469447   15187 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 02:56:20.469977   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 02:56:20.489412   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 02:56:20.506824   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 02:56:20.524239   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 02:56:20.541699   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 02:56:20.559178   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 02:56:20.576951   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 02:56:20.594579   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 02:56:20.611582   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 02:56:20.629456   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 02:56:20.646894   15187 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 02:56:20.663879   15187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 02:56:20.677087   15187 ssh_runner.go:195] Run: openssl version
	I0114 02:56:20.682805   15187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 02:56:20.691127   15187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:56:20.695128   15187 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:56:20.695190   15187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 02:56:20.700633   15187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 02:56:20.708684   15187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 02:56:20.716773   15187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 02:56:20.720671   15187 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 02:56:20.720727   15187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 02:56:20.726459   15187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 02:56:20.735189   15187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 02:56:20.743307   15187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 02:56:20.747292   15187 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 02:56:20.747338   15187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 02:56:20.752835   15187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 02:56:20.760907   15187 kubeadm.go:396] StartCluster: {Name:cilium-024326 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-024326 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:56:20.761029   15187 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 02:56:20.784311   15187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 02:56:20.792117   15187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 02:56:20.799576   15187 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 02:56:20.799634   15187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 02:56:20.807132   15187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 02:56:20.807160   15187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 02:56:20.854295   15187 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 02:56:20.854339   15187 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 02:56:20.959419   15187 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 02:56:20.959508   15187 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 02:56:20.959599   15187 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 02:56:21.092711   15187 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 02:56:21.114350   15187 out.go:204]   - Generating certificates and keys ...
	I0114 02:56:21.114447   15187 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 02:56:21.114513   15187 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 02:56:21.281144   15187 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 02:56:21.345255   15187 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 02:56:21.591151   15187 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 02:56:21.678156   15187 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 02:56:21.724097   15187 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 02:56:21.724229   15187 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [cilium-024326 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0114 02:56:21.796867   15187 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 02:56:21.796988   15187 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [cilium-024326 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0114 02:56:21.858357   15187 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 02:56:21.955411   15187 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 02:56:22.065444   15187 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 02:56:22.065503   15187 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 02:56:22.112421   15187 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 02:56:22.420617   15187 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 02:56:22.623831   15187 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 02:56:22.778228   15187 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 02:56:22.789192   15187 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 02:56:22.789869   15187 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 02:56:22.789929   15187 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 02:56:22.865219   15187 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 02:56:22.888711   15187 out.go:204]   - Booting up control plane ...
	I0114 02:56:22.888822   15187 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 02:56:22.888878   15187 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 02:56:22.888978   15187 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 02:56:22.889088   15187 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 02:56:22.889256   15187 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 02:56:23.858763   15190 api_server.go:268] stopped: https://127.0.0.1:52690/healthz: Get "https://127.0.0.1:52690/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0114 02:56:23.858793   15190 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I0114 02:56:24.240221   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:24.829833   15190 api_server.go:278] https://127.0.0.1:52690/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 02:56:24.829857   15190 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:52690/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 02:56:25.252691   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:25.257902   15190 api_server.go:278] https://127.0.0.1:52690/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:56:25.257919   15190 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:52690/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:56:25.731204   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:25.736580   15190 api_server.go:278] https://127.0.0.1:52690/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:56:25.736604   15190 retry.go:31] will retry after 587.352751ms: https://127.0.0.1:52690/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:56:26.324318   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:26.330254   15190 api_server.go:278] https://127.0.0.1:52690/healthz returned 200:
	ok
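
The run above shows the pattern minikube uses while the control plane settles: anonymous GETs against /healthz, a logged 403 or 500 body, and a growing delay before the next attempt. A minimal Go sketch of that loop, assuming hypothetical names and delays rather than minikube's real api_server.go / retry.go helpers:

// A minimal sketch, not minikube's actual code: poll /healthz anonymously,
// log the body on 403/500, and back off until the apiserver answers 200,
// e.g. waitForHealthz("https://127.0.0.1:52690/healthz", time.Minute).
package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is unauthenticated, which is why the apiserver answers 403
		// for system:anonymous until RBAC bootstrap finishes.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	delay := 400 * time.Millisecond
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok"
			}
			fmt.Printf("will retry after %v: %s returned error %d:\n%s\n", delay, url, resp.StatusCode, body)
		}
		time.Sleep(delay)
		delay += delay / 4 // grow the delay, roughly like the 381ms -> 422ms -> 473ms steps above
	}
	return fmt.Errorf("apiserver at %s never reported healthy", url)
}
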
	I0114 02:56:26.343067   15190 system_pods.go:86] 5 kube-system pods found
	I0114 02:56:26.343094   15190 system_pods.go:89] "etcd-kubernetes-upgrade-024716" [e43f2991-c145-4e62-8d39-dcd40aa0bc4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 02:56:26.343104   15190 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-024716" [f20b4b80-527a-488c-9068-a4fd2455eded] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0114 02:56:26.343112   15190 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-024716" [83605776-57d5-4fd9-861d-f04b48f8e799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 02:56:26.343119   15190 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-024716" [337ea3a4-184a-4216-a952-8213efc9ba26] Running
	I0114 02:56:26.343124   15190 system_pods.go:89] "storage-provisioner" [75a3cefa-7090-48ee-8586-1d3eadf3931a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0114 02:56:26.343131   15190 kubeadm.go:611] needs reconfigure: missing components: kube-dns, kube-proxy
	I0114 02:56:26.343148   15190 kubeadm.go:1114] stopping kube-system containers ...
	I0114 02:56:26.343253   15190 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 02:56:26.369930   15190 docker.go:444] Stopping containers: [2bb1da6e9415 c67d9bc72a2a c3861c224844 63b6bc1775ab d453cdd30854 6dd068ce0765 ac586f23d6f5 bd4e40242b4b b0298c680cf6 d8c3af3ce579 7e3b11575ada 9521bd6132b5 370ffc3e2db5 a6d5c14abfb5 9f57f4f6ac10 c4cd8b98667c 489ed9126593]
	I0114 02:56:26.370039   15190 ssh_runner.go:195] Run: docker stop 2bb1da6e9415 c67d9bc72a2a c3861c224844 63b6bc1775ab d453cdd30854 6dd068ce0765 ac586f23d6f5 bd4e40242b4b b0298c680cf6 d8c3af3ce579 7e3b11575ada 9521bd6132b5 370ffc3e2db5 a6d5c14abfb5 9f57f4f6ac10 c4cd8b98667c 489ed9126593
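
The two commands above first enumerate the kube-system containers by the kubelet's k8s_<container>_<pod>_kube-system_ naming convention and then stop them in one shot. A rough Go sketch of that step, with runSSH standing in for minikube's ssh_runner (an assumption, not the real API):

// A minimal sketch of the "stopping kube-system containers" step logged above.
package sketch

import "strings"

func stopKubeSystemContainers(runSSH func(cmd string) (string, error)) error {
	out, err := runSSH(`docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}`)
	if err != nil {
		return err
	}
	ids := strings.Fields(out) // e.g. [2bb1da6e9415 c67d9bc72a2a ...]
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	_, err = runSSH("docker stop " + strings.Join(ids, " "))
	return err
}
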
	I0114 02:56:27.160728   15190 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 02:56:27.255794   15190 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 02:56:27.264614   15190 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 14 10:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 14 10:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jan 14 10:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 14 10:55 /etc/kubernetes/scheduler.conf
	
	I0114 02:56:27.264704   15190 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 02:56:27.273020   15190 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 02:56:27.282074   15190 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 02:56:27.289693   15190 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:56:27.289755   15190 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 02:56:27.297604   15190 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 02:56:27.330318   15190 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:56:27.330392   15190 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
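
Here each existing /etc/kubernetes/*.conf is grepped for the expected control-plane endpoint; a non-zero grep exit is treated as "endpoint missing" and the file is removed so the following kubeadm phase can regenerate it. A hedged sketch of that logic, reusing the hypothetical runSSH helper:

// A minimal sketch of the kubeconfig check-and-remove logic seen above
// (kubeadm.go:155/166).
package sketch

import "fmt"

func removeStaleKubeconfigs(runSSH func(string) (string, error)) {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if _, err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			// grep exited 1: the endpoint is not in this file, so drop it and
			// let "kubeadm init phase kubeconfig" write a fresh one.
			runSSH("sudo rm -f " + path)
		}
	}
}
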
	I0114 02:56:27.338079   15190 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 02:56:27.347095   15190 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 02:56:27.347109   15190 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:56:27.396900   15190 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:56:27.924229   15190 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:56:28.079703   15190 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:56:28.132318   15190 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
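
Instead of a full kubeadm init, the restart path re-runs individual init phases against /var/tmp/minikube/kubeadm.yaml with the pinned kubeadm binary, as the five commands above show. A small illustrative wrapper (the wrapper and runSSH helper are assumptions):

// A minimal sketch of re-running kubeadm init phases during a cluster restart.
package sketch

import "fmt"

func rerunInitPhases(runSSH func(string) (string, error), k8sVersion string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, k8sVersion, phase)
		if _, err := runSSH(cmd); err != nil {
			return fmt.Errorf("kubeadm init phase %q failed: %w", phase, err)
		}
	}
	return nil
}
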
	I0114 02:56:28.196839   15190 api_server.go:51] waiting for apiserver process to appear ...
	I0114 02:56:28.196941   15190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:56:28.737710   15190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:56:29.238625   15190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:56:29.249905   15190 api_server.go:71] duration metric: took 1.053061612s to wait for apiserver process to appear ...
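
The process wait simply polls pgrep over SSH until a kube-apiserver whose command line references the minikube config exists. A sketch under the same runSSH assumption:

// A minimal sketch of the "waiting for apiserver process to appear" loop
// (api_server.go:51).
package sketch

import (
	"fmt"
	"time"
)

func waitForAPIServerProcess(runSSH func(string) (string, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			return nil // pgrep exited 0: the process exists
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second gaps between attempts
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}
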
	I0114 02:56:29.249933   15190 api_server.go:87] waiting for apiserver healthz status ...
	I0114 02:56:29.249952   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:32.372573   15187 kubeadm.go:317] [apiclient] All control plane components are healthy after 9.502126 seconds
	I0114 02:56:32.372821   15187 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0114 02:56:32.383935   15187 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0114 02:56:32.901747   15187 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0114 02:56:32.901898   15187 kubeadm.go:317] [mark-control-plane] Marking the node cilium-024326 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0114 02:56:33.408292   15187 kubeadm.go:317] [bootstrap-token] Using token: c0ihxi.q075ilv07gsxyq6m
	I0114 02:56:33.447755   15187 out.go:204]   - Configuring RBAC rules ...
	I0114 02:56:33.447867   15187 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0114 02:56:33.447975   15187 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0114 02:56:33.473690   15187 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0114 02:56:33.475952   15187 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0114 02:56:33.478221   15187 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0114 02:56:33.480328   15187 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0114 02:56:33.488116   15187 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0114 02:56:33.650325   15187 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0114 02:56:33.829476   15187 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0114 02:56:33.830338   15187 kubeadm.go:317] 
	I0114 02:56:33.830436   15187 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0114 02:56:33.830448   15187 kubeadm.go:317] 
	I0114 02:56:33.830510   15187 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0114 02:56:33.830519   15187 kubeadm.go:317] 
	I0114 02:56:33.830536   15187 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0114 02:56:33.830581   15187 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0114 02:56:33.830681   15187 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0114 02:56:33.830691   15187 kubeadm.go:317] 
	I0114 02:56:33.830746   15187 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0114 02:56:33.830761   15187 kubeadm.go:317] 
	I0114 02:56:33.830824   15187 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0114 02:56:33.830831   15187 kubeadm.go:317] 
	I0114 02:56:33.830872   15187 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0114 02:56:33.830926   15187 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0114 02:56:33.830978   15187 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0114 02:56:33.830986   15187 kubeadm.go:317] 
	I0114 02:56:33.831047   15187 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0114 02:56:33.831130   15187 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0114 02:56:33.831135   15187 kubeadm.go:317] 
	I0114 02:56:33.831193   15187 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token c0ihxi.q075ilv07gsxyq6m \
	I0114 02:56:33.831274   15187 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 \
	I0114 02:56:33.831298   15187 kubeadm.go:317] 	--control-plane 
	I0114 02:56:33.831305   15187 kubeadm.go:317] 
	I0114 02:56:33.831389   15187 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0114 02:56:33.831396   15187 kubeadm.go:317] 
	I0114 02:56:33.831464   15187 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token c0ihxi.q075ilv07gsxyq6m \
	I0114 02:56:33.831541   15187 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:991724b20e9144ef62d83dea9e408b8e540f586389499c494a5169b8b5995c39 
	I0114 02:56:33.834621   15187 kubeadm.go:317] W0114 10:56:20.846849    1084 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0114 02:56:33.834776   15187 kubeadm.go:317] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0114 02:56:33.834838   15187 kubeadm.go:317] 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0114 02:56:33.834954   15187 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 02:56:33.834963   15187 cni.go:95] Creating CNI manager for "cilium"
	I0114 02:56:33.894766   15187 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0114 02:56:32.349979   15190 api_server.go:278] https://127.0.0.1:52690/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 02:56:32.350002   15190 api_server.go:102] status: https://127.0.0.1:52690/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 02:56:32.850153   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:32.855844   15190 api_server.go:278] https://127.0.0.1:52690/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 02:56:32.855857   15190 api_server.go:102] status: https://127.0.0.1:52690/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:56:33.350108   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:33.355935   15190 api_server.go:278] https://127.0.0.1:52690/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 02:56:33.355951   15190 api_server.go:102] status: https://127.0.0.1:52690/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 02:56:33.850128   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:33.856503   15190 api_server.go:278] https://127.0.0.1:52690/healthz returned 200:
	ok
	I0114 02:56:33.863761   15190 api_server.go:140] control plane version: v1.25.3
	I0114 02:56:33.863776   15190 api_server.go:130] duration metric: took 4.61379265s to wait for apiserver health ...
	I0114 02:56:33.863784   15190 cni.go:95] Creating CNI manager for ""
	I0114 02:56:33.863788   15190 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:56:33.863793   15190 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 02:56:33.869726   15190 system_pods.go:59] 5 kube-system pods found
	I0114 02:56:33.869742   15190 system_pods.go:61] "etcd-kubernetes-upgrade-024716" [e43f2991-c145-4e62-8d39-dcd40aa0bc4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 02:56:33.869748   15190 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-024716" [f20b4b80-527a-488c-9068-a4fd2455eded] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0114 02:56:33.869753   15190 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-024716" [83605776-57d5-4fd9-861d-f04b48f8e799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 02:56:33.869758   15190 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-024716" [337ea3a4-184a-4216-a952-8213efc9ba26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0114 02:56:33.869763   15190 system_pods.go:61] "storage-provisioner" [75a3cefa-7090-48ee-8586-1d3eadf3931a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0114 02:56:33.869769   15190 system_pods.go:74] duration metric: took 5.971655ms to wait for pod list to return data ...
	I0114 02:56:33.869774   15190 node_conditions.go:102] verifying NodePressure condition ...
	I0114 02:56:33.872933   15190 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:56:33.872950   15190 node_conditions.go:123] node cpu capacity is 6
	I0114 02:56:33.872962   15190 node_conditions.go:105] duration metric: took 3.18399ms to run NodePressure ...
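
The NodePressure step above lists the nodes and checks that each reports ephemeral-storage and CPU capacity. A small client-go sketch of that check (clientset construction not shown; all names illustrative):

// A minimal sketch of the node capacity verification (node_conditions.go).
package sketch

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func verifyNodeCapacity(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[v1.ResourceCPU]
		if storage.IsZero() || cpu.IsZero() {
			return fmt.Errorf("node %s reports no capacity", n.Name)
		}
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
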
	I0114 02:56:33.872975   15190 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 02:56:34.012745   15190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 02:56:34.020693   15190 ops.go:34] apiserver oom_adj: -16
	I0114 02:56:34.020706   15190 kubeadm.go:631] restartCluster took 20.567864938s
	I0114 02:56:34.020714   15190 kubeadm.go:398] StartCluster complete in 20.622114596s
	I0114 02:56:34.020728   15190 settings.go:142] acquiring lock: {Name:mka95467446367990e489ec54b84107091d6186f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:56:34.020831   15190 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:56:34.021373   15190 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/kubeconfig: {Name:mkb6d1db5780815291441dc67b348461b9325651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 02:56:34.022197   15190 kapi.go:59] client config for kubernetes-upgrade-024716: &rest.Config{Host:"https://127.0.0.1:52690", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:56:34.025192   15190 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-024716" rescaled to 1
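
The rest.Config dumped just above points at the forwarded apiserver port and authenticates with the profile's client certificate and key against the cluster CA. A minimal client-go sketch that builds an equivalent client from those paths (illustrative only, not minikube's kapi helper):

// A minimal sketch of a client config like the one logged above (kapi.go:59).
package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func newProfileClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://127.0.0.1:52690",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}
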
	I0114 02:56:34.025229   15190 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 02:56:34.025243   15190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 02:56:34.025268   15190 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0114 02:56:34.025392   15190 config.go:180] Loaded profile config "kubernetes-upgrade-024716": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:56:34.047050   15190 out.go:177] * Verifying Kubernetes components...
	I0114 02:56:34.047246   15190 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-024716"
	I0114 02:56:34.047249   15190 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-024716"
	I0114 02:56:34.083883   15190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-024716"
	I0114 02:56:34.083902   15190 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-024716"
	I0114 02:56:34.083905   15190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W0114 02:56:34.083919   15190 addons.go:236] addon storage-provisioner should already be in state true
	I0114 02:56:34.084030   15190 host.go:66] Checking if "kubernetes-upgrade-024716" exists ...
	I0114 02:56:34.084251   15190 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-024716 --format={{.State.Status}}
	I0114 02:56:34.084410   15190 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-024716 --format={{.State.Status}}
	I0114 02:56:34.094679   15190 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0114 02:56:34.102081   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:34.164507   15190 kapi.go:59] client config for kubernetes-upgrade-024716: &rest.Config{Host:"https://127.0.0.1:52690", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubernetes-upgrade-024716/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 02:56:34.185981   15190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 02:56:33.916016   15187 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0114 02:56:33.950228   15187 cilium.go:832] Using pod CIDR: 10.244.0.0/16
	I0114 02:56:33.950245   15187 cilium.go:843] cilium options: {PodSubnet:10.244.0.0/16}
	I0114 02:56:33.950292   15187 cilium.go:847] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the less packets
	  # that will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags which, upon the
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: cluster
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: "1"
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and then
	  # should be removed ideally.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # To remove node taints
	  - nodes
	  # To set NetworkUnavailable false on startup
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s version < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration makes
	        # cilium a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9879
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9879
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.12.3@sha256:30de50c4dc0a1e1077e9e7917a54d5cab253058b3f779822aec00f5c817ca826"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
	      # We use nsenter command with host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install cilium cni plugin so that exec permissions
	          # are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.12.3@sha256:30de50c4dc0a1e1077e9e7917a54d5cab253058b3f779822aec00f5c817ca826"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.12.3@sha256:30de50c4dc0a1e1077e9e7917a54d5cab253058b3f779822aec00f5c817ca826"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path:  /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.12.3@sha256:816ec1da586139b595eeb31932c61a7c13b07fb4a0255341c0e0f18608e84eff"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
	I0114 02:56:33.950375   15187 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I0114 02:56:33.950387   15187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23434 bytes)
	I0114 02:56:33.970599   15187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
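
The manifest rendered above is first copied onto the node (scp memory --> /var/tmp/minikube/cni.yaml) and then applied with the cluster's pinned kubectl against the node-local kubeconfig. A hedged sketch of those two steps, with copyToNode and runSSH as hypothetical helpers standing in for minikube's ssh_runner:

// A minimal sketch of the CNI application step (cni.go:189, ssh_runner.go:362).
package sketch

func applyCNIManifest(copyToNode func(data []byte, dst string) error, runSSH func(string) (string, error), manifest []byte) error {
	if err := copyToNode(manifest, "/var/tmp/minikube/cni.yaml"); err != nil {
		return err
	}
	_, err := runSSH(`sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml`)
	return err
}
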
	I0114 02:56:34.655977   15187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 02:56:34.656093   15187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 02:56:34.656098   15187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81 minikube.k8s.io/name=cilium-024326 minikube.k8s.io/updated_at=2023_01_14T02_56_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 02:56:34.760236   15187 ops.go:34] apiserver oom_adj: -16
	I0114 02:56:34.760334   15187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0114 02:56:34.222245   15190 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 02:56:34.222271   15190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 02:56:34.222397   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:34.229845   15190 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-024716"
	W0114 02:56:34.229864   15190 addons.go:236] addon default-storageclass should already be in state true
	I0114 02:56:34.229904   15190 host.go:66] Checking if "kubernetes-upgrade-024716" exists ...
	I0114 02:56:34.230438   15190 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-024716 --format={{.State.Status}}
	I0114 02:56:34.233580   15190 api_server.go:51] waiting for apiserver process to appear ...
	I0114 02:56:34.233654   15190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:56:34.246012   15190 api_server.go:71] duration metric: took 220.758722ms to wait for apiserver process to appear ...
	I0114 02:56:34.246033   15190 api_server.go:87] waiting for apiserver healthz status ...
	I0114 02:56:34.246052   15190 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52690/healthz ...
	I0114 02:56:34.253736   15190 api_server.go:278] https://127.0.0.1:52690/healthz returned 200:
	ok
	I0114 02:56:34.256116   15190 api_server.go:140] control plane version: v1.25.3
	I0114 02:56:34.256131   15190 api_server.go:130] duration metric: took 10.092574ms to wait for apiserver health ...
	I0114 02:56:34.256137   15190 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 02:56:34.261772   15190 system_pods.go:59] 5 kube-system pods found
	I0114 02:56:34.261794   15190 system_pods.go:61] "etcd-kubernetes-upgrade-024716" [e43f2991-c145-4e62-8d39-dcd40aa0bc4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 02:56:34.261804   15190 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-024716" [f20b4b80-527a-488c-9068-a4fd2455eded] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0114 02:56:34.261815   15190 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-024716" [83605776-57d5-4fd9-861d-f04b48f8e799] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 02:56:34.261821   15190 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-024716" [337ea3a4-184a-4216-a952-8213efc9ba26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0114 02:56:34.261827   15190 system_pods.go:61] "storage-provisioner" [75a3cefa-7090-48ee-8586-1d3eadf3931a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0114 02:56:34.261832   15190 system_pods.go:74] duration metric: took 5.691158ms to wait for pod list to return data ...
	I0114 02:56:34.261844   15190 kubeadm.go:573] duration metric: took 236.596052ms to wait for : map[apiserver:true system_pods:true] ...
	I0114 02:56:34.261858   15190 node_conditions.go:102] verifying NodePressure condition ...
	I0114 02:56:34.265739   15190 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 02:56:34.265754   15190 node_conditions.go:123] node cpu capacity is 6
	I0114 02:56:34.265764   15190 node_conditions.go:105] duration metric: took 3.900381ms to run NodePressure ...
	I0114 02:56:34.265772   15190 start.go:217] waiting for startup goroutines ...
	I0114 02:56:34.301610   15190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52691 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:56:34.307469   15190 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 02:56:34.307485   15190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 02:56:34.307575   15190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-024716
	I0114 02:56:34.371677   15190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52691 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/kubernetes-upgrade-024716/id_rsa Username:docker}
	I0114 02:56:34.400778   15190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 02:56:34.481830   15190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 02:56:35.128069   15190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 02:56:35.148869   15190 addons.go:488] enableAddons completed in 1.123569346s
	I0114 02:56:35.149743   15190 ssh_runner.go:195] Run: rm -f paused
	I0114 02:56:35.189265   15190 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0114 02:56:35.232039   15190 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-024716" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-14 10:51:29 UTC, end at Sat 2023-01-14 10:56:36 UTC. --
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.002321798Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.023214751Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.032350353Z" level=info msg="Loading containers: start."
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.125058696Z" level=info msg="ignoring event" container=d8c3af3ce5793aa1a42ad3a26ac3c31ba2c4605a3bb75a9d084af3dd79dae04b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.126229645Z" level=info msg="ignoring event" container=7e3b11575adab36842f2916e5f2bfaf8bea4d64f3b055f992aa05aa3019b0c09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.129844288Z" level=info msg="ignoring event" container=b0298c680cf61458caea20f0a7aa19263a105cf1e0ef33439651a6e5665cb8e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.145317574Z" level=info msg="ignoring event" container=bd4e40242b4b41bdff538a9b2ae4d149e561b5c387aa0ecce809eb2960b4337c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.359535907Z" level=info msg="Removing stale sandbox 0a3f822358ea2bae92218655cbcdcbf84119e25689dbeb1e833932c913614856 (7e3b11575adab36842f2916e5f2bfaf8bea4d64f3b055f992aa05aa3019b0c09)"
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.361751629Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint a50e4ecf8e40109a0f919e80d9ef2f40358a2920e494c87bc981f8b19e5c2ba9 9f3909d0734f4fcc50cfd81654afb6ff70fa342951cf9073260ddba650797a3b], retrying...."
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.474742029Z" level=info msg="Removing stale sandbox 53f7ddda353861b404b90204ce839ad79195b020ac9d6e56993a7f363087b822 (d8c3af3ce5793aa1a42ad3a26ac3c31ba2c4605a3bb75a9d084af3dd79dae04b)"
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.476466071Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint a50e4ecf8e40109a0f919e80d9ef2f40358a2920e494c87bc981f8b19e5c2ba9 39bca07496b4c3b0a3352321ecf27bc334861f0d21c6326ab04c3ce4b9d4161f], retrying...."
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.513716400Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.563229146Z" level=info msg="Loading containers: done."
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.622587125Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.622669079Z" level=info msg="Daemon has completed initialization"
	Jan 14 10:56:11 kubernetes-upgrade-024716 systemd[1]: Started Docker Application Container Engine.
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.649146649Z" level=info msg="API listen on [::]:2376"
	Jan 14 10:56:11 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:11.652664497Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 14 10:56:26 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:26.444884306Z" level=info msg="ignoring event" container=63b6bc1775ab00bb77ec843519fcf8a93445618871360684dae13ba756a2b9b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:26 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:26.446697284Z" level=info msg="ignoring event" container=6dd068ce076596a595cdf40619cb94b08ea0895f831e9ec5ffe83646daecea5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:26 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:26.450717373Z" level=info msg="ignoring event" container=ac586f23d6f55a7c3e7913a49e7e212e8096452fe5fc47d735b3fb9829855a29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:26 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:26.452280467Z" level=info msg="ignoring event" container=d453cdd30854c9d786f70f7344a8201de819c896d45762b4e24a5421197d8adf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:26 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:26.463257844Z" level=info msg="ignoring event" container=c67d9bc72a2a06fb590ff7c4b1dd4a5f4b9abf9f4248a3df44535d7c6f8154df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:26 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:26.531155034Z" level=info msg="ignoring event" container=2bb1da6e9415737b3312fb3788920590cdc658f60bfecd0630d92f666fc89c41 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:56:27 kubernetes-upgrade-024716 dockerd[11960]: time="2023-01-14T10:56:27.135066623Z" level=info msg="ignoring event" container=c3861c224844bd19263b3f934b14151c150df77417f9dced23d322f907724668 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	eeb99c91a5ca3       6d23ec0e8b87e       8 seconds ago       Running             kube-scheduler            2                   dffb03eaa0d0c
	dadbc344f5b5b       6039992312758       8 seconds ago       Running             kube-controller-manager   2                   1ad199ad93f62
	db43930c9d2c9       0346dbd74bcb9       8 seconds ago       Running             kube-apiserver            2                   eb356064d1e25
	e4a13fe120d04       a8a176a5d5d69       8 seconds ago       Running             etcd                      3                   988c9f5b366f7
	2bb1da6e94157       a8a176a5d5d69       15 seconds ago      Exited              etcd                      2                   ac586f23d6f55
	c67d9bc72a2a0       6d23ec0e8b87e       24 seconds ago      Exited              kube-scheduler            1                   d453cdd30854c
	c3861c224844b       0346dbd74bcb9       24 seconds ago      Exited              kube-apiserver            1                   6dd068ce07659
	bd4e40242b4b4       6039992312758       27 seconds ago      Exited              kube-controller-manager   1                   7e3b11575adab
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-024716
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-024716
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=kubernetes-upgrade-024716
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T02_56_02_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:55:59 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-024716
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:56:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:56:32 +0000   Sat, 14 Jan 2023 10:55:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:56:32 +0000   Sat, 14 Jan 2023 10:55:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:56:32 +0000   Sat, 14 Jan 2023 10:55:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:56:32 +0000   Sat, 14 Jan 2023 10:56:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-024716
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc065f8e2d1f42529ccfe18f8b887c8c
	  System UUID:                dc065f8e2d1f42529ccfe18f8b887c8c
	  Boot ID:                    1fa391b2-9843-4b7f-ae34-c4015ac7f4a2
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-024716                       100m (1%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         35s
	  kube-system                 kube-apiserver-kubernetes-upgrade-024716             250m (4%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         37s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-024716    200m (3%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         37s
	  kube-system                 kube-scheduler-kubernetes-upgrade-024716             100m (1%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%!)(MISSING)  0 (0%!)(MISSING)
	  memory             100Mi (1%!)(MISSING)  0 (0%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 35s              kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  35s              kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s              kubelet  Node kubernetes-upgrade-024716 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s              kubelet  Node kubernetes-upgrade-024716 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s              kubelet  Node kubernetes-upgrade-024716 status is now: NodeHasSufficientPID
	  Normal  NodeReady                35s              kubelet  Node kubernetes-upgrade-024716 status is now: NodeReady
	  Normal  Starting                 9s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x7 over 9s)  kubelet  Node kubernetes-upgrade-024716 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x6 over 9s)  kubelet  Node kubernetes-upgrade-024716 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x6 over 9s)  kubelet  Node kubernetes-upgrade-024716 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s               kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000061] FS-Cache: O-key=[8] '69586c0400000000'
	[  +0.000051] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000052] FS-Cache: N-cookie d=000000004c8f7214{9p.inode} n=0000000037884123
	[  +0.000056] FS-Cache: N-key=[8] '69586c0400000000'
	[  +0.001452] FS-Cache: Duplicate cookie detected
	[  +0.000048] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000050] FS-Cache: O-cookie d=000000004c8f7214{9p.inode} n=00000000e96425fb
	[  +0.000055] FS-Cache: O-key=[8] '69586c0400000000'
	[  +0.000048] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000052] FS-Cache: N-cookie d=000000004c8f7214{9p.inode} n=000000002c8b6de5
	[  +0.000065] FS-Cache: N-key=[8] '69586c0400000000'
	[  +2.938998] FS-Cache: Duplicate cookie detected
	[  +0.000053] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000063] FS-Cache: O-cookie d=000000004c8f7214{9p.inode} n=00000000cf33e52e
	[  +0.000053] FS-Cache: O-key=[8] '68586c0400000000'
	[  +0.000044] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000062] FS-Cache: N-cookie d=000000004c8f7214{9p.inode} n=0000000028ebb5ff
	[  +0.000039] FS-Cache: N-key=[8] '68586c0400000000'
	[  +0.399425] FS-Cache: Duplicate cookie detected
	[  +0.000077] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000095] FS-Cache: O-cookie d=000000004c8f7214{9p.inode} n=000000005fb5370a
	[  +0.000107] FS-Cache: O-key=[8] '71586c0400000000'
	[  +0.000082] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000073] FS-Cache: N-cookie d=000000004c8f7214{9p.inode} n=000000006daca0f3
	[  +0.000104] FS-Cache: N-key=[8] '71586c0400000000'
	
	* 
	* ==> etcd [2bb1da6e9415] <==
	* {"level":"info","ts":"2023-01-14T10:56:21.410Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-14T10:56:21.410Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:56:21.410Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:56:23.002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-14T10:56:23.002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:56:23.002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-01-14T10:56:23.002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-01-14T10:56:23.002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-14T10:56:23.002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-01-14T10:56:23.002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-14T10:56:23.004Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-024716 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:56:23.004Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:56:23.004Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:56:23.004Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:56:23.005Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:56:23.005Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:56:23.005Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-14T10:56:26.401Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-14T10:56:26.402Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-024716","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2023/01/14 10:56:26 [core] grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 192.168.76.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2023/01/14 10:56:26 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2023-01-14T10:56:26.409Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-01-14T10:56:26.425Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-14T10:56:26.427Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-14T10:56:26.427Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-024716","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [e4a13fe120d0] <==
	* {"level":"info","ts":"2023-01-14T10:56:28.967Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-14T10:56:28.968Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-14T10:56:28.970Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-01-14T10:56:28.970Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-01-14T10:56:28.970Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:56:28.970Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:56:28.971Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T10:56:28.971Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:56:28.971Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:56:28.971Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-14T10:56:28.971Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-024716 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:56:30.747Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:56:30.748Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-14T10:56:30.746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  10:56:37 up 55 min,  0 users,  load average: 1.80, 1.61, 1.28
	Linux kubernetes-upgrade-024716 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [c3861c224844] <==
	* W0114 10:56:26.408075       1 logging.go:59] [core] [Channel #57 SubChannel #58] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0114 10:56:26.408107       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0114 10:56:26.408176       1 logging.go:59] [core] [Channel #108 SubChannel #109] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	I0114 10:56:26.429358       1 controller.go:211] Shutting down kubernetes service endpoint reconciler
	
	* 
	* ==> kube-apiserver [db43930c9d2c] <==
	* I0114 10:56:32.331883       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0114 10:56:32.331890       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0114 10:56:32.331928       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0114 10:56:32.333121       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:56:32.336760       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0114 10:56:32.336797       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0114 10:56:32.347635       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0114 10:56:32.347755       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 10:56:32.356677       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0114 10:56:32.442443       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0114 10:56:32.467116       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:56:32.531059       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 10:56:32.531072       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 10:56:32.531084       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 10:56:32.532973       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 10:56:32.533022       1 cache.go:39] Caches are synced for autoregister controller
	I0114 10:56:32.536502       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 10:56:32.537317       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0114 10:56:33.151407       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:56:33.333950       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:56:33.956224       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 10:56:33.965421       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 10:56:33.985350       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 10:56:34.000603       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:56:34.005537       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [bd4e40242b4b] <==
	* I0114 10:56:10.337337       1 serving.go:348] Generated self-signed cert in-memory
	I0114 10:56:10.794066       1 controllermanager.go:178] Version: v1.25.3
	I0114 10:56:10.794095       1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:56:10.795059       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0114 10:56:10.795106       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 10:56:10.795342       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0114 10:56:10.795509       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [dadbc344f5b5] <==
	* I0114 10:56:35.886147       1 garbagecollector.go:154] Starting garbage collector controller
	I0114 10:56:35.886162       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0114 10:56:35.886304       1 graph_builder.go:291] GraphBuilder running
	I0114 10:56:35.936367       1 controllermanager.go:603] Started "deployment"
	I0114 10:56:35.936493       1 deployment_controller.go:160] "Starting controller" controller="deployment"
	I0114 10:56:35.936502       1 shared_informer.go:255] Waiting for caches to sync for deployment
	I0114 10:56:36.035266       1 controllermanager.go:603] Started "persistentvolume-expander"
	I0114 10:56:36.035296       1 expand_controller.go:340] Starting expand controller
	I0114 10:56:36.035309       1 shared_informer.go:255] Waiting for caches to sync for expand
	I0114 10:56:36.236525       1 controllermanager.go:603] Started "replicationcontroller"
	I0114 10:56:36.236576       1 replica_set.go:205] Starting replicationcontroller controller
	I0114 10:56:36.236584       1 shared_informer.go:255] Waiting for caches to sync for ReplicationController
	I0114 10:56:36.386049       1 controllermanager.go:603] Started "ttl"
	I0114 10:56:36.386161       1 ttl_controller.go:120] Starting TTL controller
	I0114 10:56:36.386224       1 shared_informer.go:255] Waiting for caches to sync for TTL
	I0114 10:56:36.436011       1 controllermanager.go:603] Started "root-ca-cert-publisher"
	I0114 10:56:36.436074       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0114 10:56:36.436080       1 shared_informer.go:255] Waiting for caches to sync for crt configmap
	I0114 10:56:36.486105       1 controllermanager.go:603] Started "ephemeral-volume"
	I0114 10:56:36.486203       1 controller.go:169] Starting ephemeral volume controller
	I0114 10:56:36.486210       1 shared_informer.go:255] Waiting for caches to sync for ephemeral
	I0114 10:56:36.636513       1 controllermanager.go:603] Started "job"
	I0114 10:56:36.636591       1 job_controller.go:196] Starting job controller
	I0114 10:56:36.636596       1 shared_informer.go:255] Waiting for caches to sync for job
	I0114 10:56:36.685355       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [c67d9bc72a2a] <==
	* I0114 10:56:13.631464       1 serving.go:348] Generated self-signed cert in-memory
	W0114 10:56:24.111821       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0114 10:56:24.111870       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 10:56:24.111876       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 10:56:24.847620       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 10:56:24.847757       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:56:24.849419       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 10:56:24.849552       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:56:24.849556       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 10:56:24.849573       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:56:24.950639       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0114 10:56:26.427509       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I0114 10:56:26.427509       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0114 10:56:26.428111       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0114 10:56:26.428504       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [eeb99c91a5ca] <==
	* W0114 10:56:32.437031       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0114 10:56:32.437916       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:56:32.437056       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0114 10:56:32.437946       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0114 10:56:32.437159       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:56:32.437976       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0114 10:56:32.437227       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0114 10:56:32.438054       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0114 10:56:32.437315       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0114 10:56:32.438079       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0114 10:56:32.437327       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0114 10:56:32.438102       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0114 10:56:32.437414       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0114 10:56:32.438240       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0114 10:56:32.437409       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0114 10:56:32.438275       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0114 10:56:32.437489       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0114 10:56:32.438419       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0114 10:56:32.437511       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0114 10:56:32.438641       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0114 10:56:32.437581       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0114 10:56:32.438844       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0114 10:56:32.437793       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E0114 10:56:32.438964       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	I0114 10:56:33.882843       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 10:51:29 UTC, end at Sat 2023-01-14 10:56:38 UTC. --
	Jan 14 10:56:30 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:30.332223   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:30 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:30.432616   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:30 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:30.533154   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:30 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:30.634277   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:30 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:30.735414   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:30 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:30.835817   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:30 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:30.937087   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.037992   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.138608   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.238740   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.339060   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.440307   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.541547   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.642638   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.742863   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.842952   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:31 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:31.943659   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:32 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:32.044343   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:32 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:32.144968   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:32 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:32.245630   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:32 kubernetes-upgrade-024716 kubelet[13380]: E0114 10:56:32.346435   13380 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-024716\" not found"
	Jan 14 10:56:32 kubernetes-upgrade-024716 kubelet[13380]: I0114 10:56:32.456865   13380 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-024716"
	Jan 14 10:56:32 kubernetes-upgrade-024716 kubelet[13380]: I0114 10:56:32.456957   13380 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-024716"
	Jan 14 10:56:33 kubernetes-upgrade-024716 kubelet[13380]: I0114 10:56:33.161536   13380 apiserver.go:52] "Watching apiserver"
	Jan 14 10:56:33 kubernetes-upgrade-024716 kubelet[13380]: I0114 10:56:33.254300   13380 reconciler.go:169] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-024716 -n kubernetes-upgrade-024716
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-024716 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-024716 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-024716 describe pod storage-provisioner: exit status 1 (53.188334ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-024716 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-024716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-024716
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-024716: (3.138761944s)
--- FAIL: TestKubernetesUpgrade (566.35s)
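Note: the post-mortem commands recorded above can be replayed by hand to dig further, provided the profile has not yet been deleted by the cleanup step. A minimal sketch using only the invocations already shown in this log (it assumes the kubernetes-upgrade-024716 profile and the out/minikube-darwin-amd64 binary still exist):

    # report the apiserver state for the profile
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-024716 -n kubernetes-upgrade-024716
    # list any pods not in the Running phase (here: storage-provisioner, Pending on the not-ready taint)
    kubectl --context kubernetes-upgrade-024716 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
    # describe the non-running pod for scheduling details (returns NotFound once the pod is gone)
    kubectl --context kubernetes-upgrade-024716 describe pod storage-provisioner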

                                                
                                    
TestMissingContainerUpgrade (76.95s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2409690989.exe start -p missing-upgrade-024559 --memory=2200 --driver=docker 
E0114 02:47:02.209485    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2409690989.exe start -p missing-upgrade-024559 --memory=2200 --driver=docker : exit status 78 (1m1.174682271s)

                                                
                                                
-- stdout --
	! [missing-upgrade-024559] minikube v1.9.1 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-024559
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-024559" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (progress updates elided)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:46:41.840506906 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-024559" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:47:01.267132296 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
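The repeated failure above is docker.service refusing to start inside the provisioned node. Since the docker inspect output later in this test shows the kic container still running with systemd as PID 1, one hedged way to pull the logs the error message points at ("systemctl status docker.service" and "journalctl -xe") is to exec into the node container; this assumes the container from this run is still up and keeps the name from the log:

    # query the failed unit and its recent journal entries inside the node container
    docker exec missing-upgrade-024559 systemctl status docker.service --no-pager
    docker exec missing-upgrade-024559 journalctl -u docker.service --no-pager -n 50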
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2409690989.exe start -p missing-upgrade-024559 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2409690989.exe start -p missing-upgrade-024559 --memory=2200 --driver=docker : exit status 70 (3.935275422s)

                                                
                                                
-- stdout --
	* [missing-upgrade-024559] minikube v1.9.1 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-024559
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-024559" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2409690989.exe start -p missing-upgrade-024559 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2409690989.exe start -p missing-upgrade-024559 --memory=2200 --driver=docker : exit status 70 (4.013182891s)

                                                
                                                
-- stdout --
	* [missing-upgrade-024559] minikube v1.9.1 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-024559
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-024559" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-01-14 02:47:13.581526 -0800 PST m=+2512.868914165
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-024559
helpers_test.go:235: (dbg) docker inspect missing-upgrade-024559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b22efb14c1f2fa6069785570b7a053c12b452ae4102d05cae04ca7f419228324",
	        "Created": "2023-01-14T10:46:49.997855077Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156420,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T10:46:50.234542184Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/b22efb14c1f2fa6069785570b7a053c12b452ae4102d05cae04ca7f419228324/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b22efb14c1f2fa6069785570b7a053c12b452ae4102d05cae04ca7f419228324/hostname",
	        "HostsPath": "/var/lib/docker/containers/b22efb14c1f2fa6069785570b7a053c12b452ae4102d05cae04ca7f419228324/hosts",
	        "LogPath": "/var/lib/docker/containers/b22efb14c1f2fa6069785570b7a053c12b452ae4102d05cae04ca7f419228324/b22efb14c1f2fa6069785570b7a053c12b452ae4102d05cae04ca7f419228324-json.log",
	        "Name": "/missing-upgrade-024559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-024559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/72e2bff62750ea973e6c8a599d1bf2edab3d9e4b6ebc730d5e662c448a399637-init/diff:/var/lib/docker/overlay2/d9e0372027d4333f5dfc0260dc68a1ef91bfcf7d5f5dff141a717545493ad065/diff:/var/lib/docker/overlay2/46081d4fb25a1a237bec1d8b89142bd952d9a8ff642dc579eeb356856b3dc8f6/diff:/var/lib/docker/overlay2/eb8ce44283821701025459292279f6b660732954d70c28df16a5e34c5d1ee092/diff:/var/lib/docker/overlay2/374900f896c926260c57188b12e8004bbdeb9a35b372541753d25218b7ca4a49/diff:/var/lib/docker/overlay2/64d9423cd618ede75c9ec11b1e40f8936a2d9da7469dcd147c73b0c79314810e/diff:/var/lib/docker/overlay2/f09f1c250550837a5a081fdfb60d64a85956bc878c521067af6674d12588d9c6/diff:/var/lib/docker/overlay2/1bc8eb0fac0a908b4186d4606052ee443a47977f5b3c3b24901f17432bca2123/diff:/var/lib/docker/overlay2/e4a65e0de54c70dd035902d2b48fd4522d689efc3d4adb1d6a7c7e3c66663b75/diff:/var/lib/docker/overlay2/d1dd7d1bcda554415df7eb487329ae4cd88b1dafd1ca8370c77359bd9e890fc4/diff:/var/lib/docker/overlay2/c3abbd
bbc845f336a493a947a15ae79e8a7332c0a95294c02182b983a80ada3c/diff:/var/lib/docker/overlay2/8c241dc16e96a8e06e950dae445df572495725ed987a37a60a0d0aa6356af65f/diff:/var/lib/docker/overlay2/4346c678d640d3c7b956f2ac5e9a9b79402dc7681c38c3b1f39282863407d785/diff:/var/lib/docker/overlay2/961f1824ebaaf19cfbf85968119412950aeb0f2e10fc4f27696105167c943f97/diff:/var/lib/docker/overlay2/351c2895fcfe559e893dc1b96a95b91a611bebbe4185fad4b356163e0d53e0a4/diff:/var/lib/docker/overlay2/75541cc507d5ef571abe82555fbeabb82cda190d37788579def271183baef953/diff:/var/lib/docker/overlay2/262ff3966059c3410227ffadb65e17ec76f47f8ca8af6c5b335324c0e8dc82f1/diff:/var/lib/docker/overlay2/9e45f365e6a8d120e6856b0f4ee4ef3d08632d0c2030373b57671008238f4c9f/diff:/var/lib/docker/overlay2/9d313d6638db8ca67fde08c40aabf92765fcd43884b3b16527b5e97ee9481ba5/diff:/var/lib/docker/overlay2/1dd6c3e9fb55b6e0e8e620ab5f0b619d668e72f21fce99308f0ac9b583353cf7/diff:/var/lib/docker/overlay2/4cbf7e516424b9114144c6b3c498b6615fa96d4c548bfbfd45c308e8bdf992e3/diff:/var/lib/d
ocker/overlay2/39aa6d3378f9b18db0d84015623602a66f5abb424e568220a533eac5821912c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72e2bff62750ea973e6c8a599d1bf2edab3d9e4b6ebc730d5e662c448a399637/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72e2bff62750ea973e6c8a599d1bf2edab3d9e4b6ebc730d5e662c448a399637/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72e2bff62750ea973e6c8a599d1bf2edab3d9e4b6ebc730d5e662c448a399637/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-024559",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-024559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-024559",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-024559",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-024559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb304962b1c2402fba87ca19b54446045fd09aab86aab03ae99639e3d72bb11d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52406"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52407"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/eb304962b1c2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "241a3311a2dc347630065d2863114c191dc04c366841fa772183d626ad5bc406",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "5d3333edc5816d83f58b9a16a9d3c8b9bad2c1b09e956286348d6a0d2906ce7d",
	                    "EndpointID": "241a3311a2dc347630065d2863114c191dc04c366841fa772183d626ad5bc406",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
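Instead of dumping the whole inspect object as above, the same facts (state, image, published ports) can be read back with docker inspect's Go templates; a small sketch using the container name from this run:

    # container state and init PID
    docker inspect --format '{{.State.Status}} (pid {{.State.Pid}})' missing-upgrade-024559
    # base image, and the host ports mapped for ssh/dockerd/apiserver
    docker inspect --format '{{.Config.Image}}' missing-upgrade-024559
    docker inspect --format '{{json .NetworkSettings.Ports}}' missing-upgrade-024559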
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-024559 -n missing-upgrade-024559
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-024559 -n missing-upgrade-024559: exit status 6 (388.310225ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 02:47:14.017579   12695 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-024559" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-024559" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-024559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-024559
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-024559: (2.319911839s)
--- FAIL: TestMissingContainerUpgrade (76.95s)
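The status check above also warns that kubectl is pointing at a stale context; the fix the warning itself suggests, spelled out for this profile (only meaningful before the cleanup step deletes it), is roughly:

    # refresh the kubeconfig entry for the profile, then re-check the host state
    out/minikube-darwin-amd64 update-context -p missing-upgrade-024559
    out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-024559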

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (59.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1159458649.exe start -p stopped-upgrade-024857 --memory=2200 --vm-driver=docker 
E0114 02:49:19.838266    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 02:49:20.336101    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1159458649.exe start -p stopped-upgrade-024857 --memory=2200 --vm-driver=docker : exit status 70 (47.325978097s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-024857] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2722838027
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:49:26.246498006 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-024857" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:49:45.825499135 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-024857", then "minikube start -p stopped-upgrade-024857 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (progress updates elided)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 10:49:45.825499135 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1159458649.exe start -p stopped-upgrade-024857 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1159458649.exe start -p stopped-upgrade-024857 --memory=2200 --vm-driver=docker : exit status 70 (4.412904262s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-024857] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3547165380
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-024857" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1159458649.exe start -p stopped-upgrade-024857 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.1159458649.exe start -p stopped-upgrade-024857 --memory=2200 --vm-driver=docker : exit status 70 (4.29379429s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-024857] minikube v1.9.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig299489289
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-024857" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (59.04s)
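The v1.9.0 binary's own suggestion above is to retry with more verbose logging; the two commands it prints, spelled out for this profile, are:

    # from the "Run: ..." hint in the failure output above
    minikube delete -p stopped-upgrade-024857
    minikube start -p stopped-upgrade-024857 --alsologtostderr -v=1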

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (252.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-030235 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0114 03:02:46.612382    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:46.617493    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:46.628397    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:46.650573    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:46.690727    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:46.771190    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:46.931325    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:47.251491    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:47.892244    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:49.174334    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:51.735990    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:56.857642    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:02:58.414158    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 03:03:07.097987    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-030235 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m12.220281524s)

                                                
                                                
-- stdout --
	* [old-k8s-version-030235] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-030235 in cluster old-k8s-version-030235
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 03:02:35.165501   16696 out.go:296] Setting OutFile to fd 1 ...
	I0114 03:02:35.165693   16696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:02:35.165700   16696 out.go:309] Setting ErrFile to fd 2...
	I0114 03:02:35.165704   16696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:02:35.165818   16696 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 03:02:35.166394   16696 out.go:303] Setting JSON to false
	I0114 03:02:35.186253   16696 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3729,"bootTime":1673690426,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 03:02:35.186385   16696 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 03:02:35.208944   16696 out.go:177] * [old-k8s-version-030235] minikube v1.28.0 on Darwin 13.0.1
	I0114 03:02:35.266880   16696 notify.go:220] Checking for updates...
	I0114 03:02:35.290828   16696 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 03:02:35.349725   16696 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 03:02:35.408315   16696 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 03:02:35.451456   16696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 03:02:35.525793   16696 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 03:02:35.548136   16696 config.go:180] Loaded profile config "kubenet-024325": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:02:35.548231   16696 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 03:02:35.661849   16696 docker.go:138] docker version: linux-20.10.21
	I0114 03:02:35.662022   16696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:02:35.836461   16696 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:02:35.718711008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:02:35.859162   16696 out.go:177] * Using the docker driver based on user configuration
	I0114 03:02:35.895011   16696 start.go:294] selected driver: docker
	I0114 03:02:35.895027   16696 start.go:838] validating driver "docker" against <nil>
	I0114 03:02:35.895048   16696 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 03:02:35.897885   16696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:02:36.071417   16696 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:02:35.956989466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:02:36.071608   16696 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 03:02:36.071782   16696 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 03:02:36.095960   16696 out.go:177] * Using Docker Desktop driver with root privileges
	I0114 03:02:36.117177   16696 cni.go:95] Creating CNI manager for ""
	I0114 03:02:36.117198   16696 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:02:36.117219   16696 start_flags.go:319] config:
	{Name:old-k8s-version-030235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-030235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:02:36.160110   16696 out.go:177] * Starting control plane node old-k8s-version-030235 in cluster old-k8s-version-030235
	I0114 03:02:36.181105   16696 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 03:02:36.202023   16696 out.go:177] * Pulling base image ...
	I0114 03:02:36.244308   16696 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 03:02:36.244352   16696 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 03:02:36.244397   16696 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0114 03:02:36.244414   16696 cache.go:57] Caching tarball of preloaded images
	I0114 03:02:36.244664   16696 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 03:02:36.244687   16696 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0114 03:02:36.245779   16696 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/config.json ...
	I0114 03:02:36.245971   16696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/config.json: {Name:mk2fcf0f17202a0ae84341ab3578912698f667a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:02:36.304254   16696 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 03:02:36.304282   16696 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 03:02:36.304310   16696 cache.go:193] Successfully downloaded all kic artifacts
	I0114 03:02:36.304364   16696 start.go:364] acquiring machines lock for old-k8s-version-030235: {Name:mk0a4f570c8f2752e6db1ad5a8ffefc98930515a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:02:36.304533   16696 start.go:368] acquired machines lock for "old-k8s-version-030235" in 144.647µs
	I0114 03:02:36.304579   16696 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-030235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-030235 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 03:02:36.304667   16696 start.go:125] createHost starting for "" (driver="docker")
	I0114 03:02:36.326607   16696 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0114 03:02:36.327009   16696 start.go:159] libmachine.API.Create for "old-k8s-version-030235" (driver="docker")
	I0114 03:02:36.327073   16696 client.go:168] LocalClient.Create starting
	I0114 03:02:36.327278   16696 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem
	I0114 03:02:36.327367   16696 main.go:134] libmachine: Decoding PEM data...
	I0114 03:02:36.327400   16696 main.go:134] libmachine: Parsing certificate...
	I0114 03:02:36.327531   16696 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem
	I0114 03:02:36.327601   16696 main.go:134] libmachine: Decoding PEM data...
	I0114 03:02:36.327619   16696 main.go:134] libmachine: Parsing certificate...
	I0114 03:02:36.328486   16696 cli_runner.go:164] Run: docker network inspect old-k8s-version-030235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0114 03:02:36.390198   16696 cli_runner.go:211] docker network inspect old-k8s-version-030235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0114 03:02:36.390299   16696 network_create.go:280] running [docker network inspect old-k8s-version-030235] to gather additional debugging logs...
	I0114 03:02:36.390317   16696 cli_runner.go:164] Run: docker network inspect old-k8s-version-030235
	W0114 03:02:36.448533   16696 cli_runner.go:211] docker network inspect old-k8s-version-030235 returned with exit code 1
	I0114 03:02:36.448557   16696 network_create.go:283] error running [docker network inspect old-k8s-version-030235]: docker network inspect old-k8s-version-030235: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-030235
	I0114 03:02:36.448574   16696 network_create.go:285] output of [docker network inspect old-k8s-version-030235]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-030235
	
	** /stderr **
	I0114 03:02:36.448671   16696 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 03:02:36.509179   16696 network.go:277] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0007e2710] misses:0}
	I0114 03:02:36.509221   16696 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:02:36.509236   16696 network_create.go:123] attempt to create docker network old-k8s-version-030235 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0114 03:02:36.509338   16696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-030235 old-k8s-version-030235
	W0114 03:02:36.569871   16696 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-030235 old-k8s-version-030235 returned with exit code 1
	W0114 03:02:36.569909   16696 network_create.go:115] failed to create docker network old-k8s-version-030235 192.168.49.0/24, will retry: subnet is taken
	I0114 03:02:36.570195   16696 network.go:268] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007e2710] amended:false}} dirty:map[] misses:0}
	I0114 03:02:36.570210   16696 network.go:213] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:02:36.570420   16696 network.go:277] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007e2710] amended:true}} dirty:map[192.168.49.0:0xc0007e2710 192.168.58.0:0xc0003c4500] misses:0}
	I0114 03:02:36.570432   16696 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:02:36.570441   16696 network_create.go:123] attempt to create docker network old-k8s-version-030235 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0114 03:02:36.570529   16696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-030235 old-k8s-version-030235
	W0114 03:02:36.626996   16696 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-030235 old-k8s-version-030235 returned with exit code 1
	W0114 03:02:36.627037   16696 network_create.go:115] failed to create docker network old-k8s-version-030235 192.168.58.0/24, will retry: subnet is taken
	I0114 03:02:36.627306   16696 network.go:268] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007e2710] amended:true}} dirty:map[192.168.49.0:0xc0007e2710 192.168.58.0:0xc0003c4500] misses:1}
	I0114 03:02:36.627324   16696 network.go:213] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:02:36.627542   16696 network.go:277] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007e2710] amended:true}} dirty:map[192.168.49.0:0xc0007e2710 192.168.58.0:0xc0003c4500 192.168.67.0:0xc0011540c8] misses:1}
	I0114 03:02:36.627554   16696 network.go:210] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:02:36.627572   16696 network_create.go:123] attempt to create docker network old-k8s-version-030235 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0114 03:02:36.627675   16696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-030235 old-k8s-version-030235
	W0114 03:02:36.688381   16696 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-030235 old-k8s-version-030235 returned with exit code 1
	W0114 03:02:36.688420   16696 network_create.go:115] failed to create docker network old-k8s-version-030235 192.168.67.0/24, will retry: subnet is taken
	I0114 03:02:36.688675   16696 network.go:268] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007e2710] amended:true}} dirty:map[192.168.49.0:0xc0007e2710 192.168.58.0:0xc0003c4500 192.168.67.0:0xc0011540c8] misses:2}
	I0114 03:02:36.688700   16696 network.go:213] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:02:36.688916   16696 network.go:277] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0007e2710] amended:true}} dirty:map[192.168.49.0:0xc0007e2710 192.168.58.0:0xc0003c4500 192.168.67.0:0xc0011540c8 192.168.76.0:0xc0003c4538] misses:2}
	I0114 03:02:36.688930   16696 network.go:210] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:02:36.688939   16696 network_create.go:123] attempt to create docker network old-k8s-version-030235 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0114 03:02:36.689028   16696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-030235 old-k8s-version-030235
	I0114 03:02:36.785000   16696 network_create.go:107] docker network old-k8s-version-030235 192.168.76.0/24 created
	I0114 03:02:36.785037   16696 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-030235" container
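The four network-create attempts above are minikube probing for a free /24: it reserves a candidate subnet (192.168.49.0, then 58.0, 67.0, and finally 76.0, the third octet stepping by 9), runs docker network create, and moves on whenever the daemon rejects the range as taken. A minimal Go sketch of that retry loop, under the assumption that the step size is fixed and that any create failure means the subnet is occupied (the real network_create.go distinguishes error causes):

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork walks candidate 192.168.x.0/24 subnets the way the retries
// above do (49 -> 58 -> 67 -> 76), calling `docker network create` until the
// daemon accepts one. Treating every create failure as "subnet taken" is a
// simplification for this sketch.
func createNetwork(name string) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		args := []string{"network", "create", "--driver=bridge",
			"--subnet=" + subnet, "--gateway=" + gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true", name}
		if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
			fmt.Printf("subnet %s rejected (%v): %s", subnet, err, out)
			continue // assume the subnet is taken and try the next candidate
		}
		return subnet, nil
	}
	return "", fmt.Errorf("no free subnet found for network %q", name)
}

func main() {
	if subnet, err := createNetwork("old-k8s-version-030235"); err == nil {
		fmt.Println("created docker network on", subnet)
	} else {
		fmt.Println(err)
	}
}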
	I0114 03:02:36.785184   16696 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0114 03:02:36.844466   16696 cli_runner.go:164] Run: docker volume create old-k8s-version-030235 --label name.minikube.sigs.k8s.io=old-k8s-version-030235 --label created_by.minikube.sigs.k8s.io=true
	I0114 03:02:36.905671   16696 oci.go:103] Successfully created a docker volume old-k8s-version-030235
	I0114 03:02:36.905835   16696 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-030235-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-030235 --entrypoint /usr/bin/test -v old-k8s-version-030235:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0114 03:02:37.375264   16696 oci.go:107] Successfully prepared a docker volume old-k8s-version-030235
	I0114 03:02:37.375311   16696 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 03:02:37.375326   16696 kic.go:190] Starting extracting preloaded images to volume ...
	I0114 03:02:37.375437   16696 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-030235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0114 03:02:44.260102   16696 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-030235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (6.884548087s)
	I0114 03:02:44.260124   16696 kic.go:199] duration metric: took 6.884745 seconds to extract preloaded images to volume
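The preload step above works by bind-mounting the lz4 tarball read-only into a throwaway kicbase container and untarring it into the old-k8s-version-030235 volume, which later backs /var in the node container. A sketch of that same docker run invocation in Go, with the paths and volume name copied from the log (the helper name is ours, not minikube's; the image digest is elided for brevity):

package main

import (
	"os"
	"os/exec"
)

// extractPreload replays the "docker run --rm --entrypoint /usr/bin/tar" call
// logged above: mount the lz4 preload read-only and unpack it into the named
// volume that will become /var in the node container.
func extractPreload(tarball, volume, kicImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = extractPreload(
		"/Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"old-k8s-version-030235",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272")
}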
	I0114 03:02:44.260240   16696 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0114 03:02:44.411594   16696 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-030235 --name old-k8s-version-030235 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-030235 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-030235 --network old-k8s-version-030235 --ip 192.168.76.2 --volume old-k8s-version-030235:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0114 03:02:44.861154   16696 cli_runner.go:164] Run: docker container inspect old-k8s-version-030235 --format={{.State.Running}}
	I0114 03:02:44.928399   16696 cli_runner.go:164] Run: docker container inspect old-k8s-version-030235 --format={{.State.Status}}
	I0114 03:02:45.000351   16696 cli_runner.go:164] Run: docker exec old-k8s-version-030235 stat /var/lib/dpkg/alternatives/iptables
	I0114 03:02:45.119698   16696 oci.go:144] the created container "old-k8s-version-030235" has a running status.
	I0114 03:02:45.119730   16696 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa...
	I0114 03:02:45.195075   16696 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0114 03:02:45.308008   16696 cli_runner.go:164] Run: docker container inspect old-k8s-version-030235 --format={{.State.Status}}
	I0114 03:02:45.371711   16696 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0114 03:02:45.371734   16696 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-030235 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0114 03:02:45.474910   16696 cli_runner.go:164] Run: docker container inspect old-k8s-version-030235 --format={{.State.Status}}
	I0114 03:02:45.533100   16696 machine.go:88] provisioning docker machine ...
	I0114 03:02:45.533145   16696 ubuntu.go:169] provisioning hostname "old-k8s-version-030235"
	I0114 03:02:45.533262   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:45.599646   16696 main.go:134] libmachine: Using SSH client type: native
	I0114 03:02:45.599850   16696 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53832 <nil> <nil>}
	I0114 03:02:45.599869   16696 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-030235 && echo "old-k8s-version-030235" | sudo tee /etc/hostname
	I0114 03:02:45.731016   16696 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-030235
	
	I0114 03:02:45.731152   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:45.790834   16696 main.go:134] libmachine: Using SSH client type: native
	I0114 03:02:45.791005   16696 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53832 <nil> <nil>}
	I0114 03:02:45.791020   16696 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-030235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-030235/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-030235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 03:02:45.908083   16696 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:02:45.908105   16696 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 03:02:45.908126   16696 ubuntu.go:177] setting up certificates
	I0114 03:02:45.908134   16696 provision.go:83] configureAuth start
	I0114 03:02:45.908220   16696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-030235
	I0114 03:02:45.965245   16696 provision.go:138] copyHostCerts
	I0114 03:02:45.965356   16696 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 03:02:45.965364   16696 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 03:02:45.965487   16696 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 03:02:45.965713   16696 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 03:02:45.965719   16696 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 03:02:45.965782   16696 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 03:02:45.965940   16696 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 03:02:45.965946   16696 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 03:02:45.966007   16696 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 03:02:45.966140   16696 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-030235 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-030235]
	I0114 03:02:46.112708   16696 provision.go:172] copyRemoteCerts
	I0114 03:02:46.112779   16696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 03:02:46.112843   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:46.172286   16696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53832 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:02:46.259115   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 03:02:46.276445   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0114 03:02:46.293573   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 03:02:46.310987   16696 provision.go:86] duration metric: configureAuth took 402.837849ms
	I0114 03:02:46.311004   16696 ubuntu.go:193] setting minikube options for container-runtime
	I0114 03:02:46.311159   16696 config.go:180] Loaded profile config "old-k8s-version-030235": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0114 03:02:46.311236   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:46.370686   16696 main.go:134] libmachine: Using SSH client type: native
	I0114 03:02:46.370849   16696 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53832 <nil> <nil>}
	I0114 03:02:46.370863   16696 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 03:02:46.489526   16696 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 03:02:46.489547   16696 ubuntu.go:71] root file system type: overlay
	I0114 03:02:46.489675   16696 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 03:02:46.489767   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:46.547705   16696 main.go:134] libmachine: Using SSH client type: native
	I0114 03:02:46.547857   16696 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53832 <nil> <nil>}
	I0114 03:02:46.547906   16696 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 03:02:46.674943   16696 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 03:02:46.675070   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:46.733223   16696 main.go:134] libmachine: Using SSH client type: native
	I0114 03:02:46.733399   16696 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 53832 <nil> <nil>}
	I0114 03:02:46.733413   16696 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 03:02:47.318649   16696 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 11:02:46.672100181 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
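The interesting part of the diff above is the pair of ExecStart= lines: the empty ExecStart= clears the command inherited from the stock docker.service before the TLS-enabled dockerd command is set, since systemd only allows multiple ExecStart= entries for Type=oneshot services. A small sketch of rendering that fragment with Go's text/template; the template and field names are ours, not the ones in minikube's provisioner:

package main

import (
	"os"
	"text/template"
)

// serviceTmpl renders the [Service] fragment written into
// /lib/systemd/system/docker.service above: reset ExecStart=, then set the
// dockerd command with the TLS material copied to /etc/docker.
var serviceTmpl = template.Must(template.New("svc").Parse(`[Service]
Type=notify
Restart=on-failure
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} --label provider=docker --insecure-registry {{.ServiceCIDR}}
ExecReload=/bin/kill -s HUP $MAINPID
`))

func main() {
	_ = serviceTmpl.Execute(os.Stdout, map[string]string{
		"CACert":      "/etc/docker/ca.pem",
		"ServerCert":  "/etc/docker/server.pem",
		"ServerKey":   "/etc/docker/server-key.pem",
		"ServiceCIDR": "10.96.0.0/12",
	})
}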
	
	I0114 03:02:47.318676   16696 machine.go:91] provisioned docker machine in 1.785544029s
	I0114 03:02:47.318687   16696 client.go:171] LocalClient.Create took 10.99152392s
	I0114 03:02:47.318705   16696 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-030235" took 10.991617979s
	I0114 03:02:47.318715   16696 start.go:300] post-start starting for "old-k8s-version-030235" (driver="docker")
	I0114 03:02:47.318721   16696 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 03:02:47.318796   16696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 03:02:47.318860   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:47.378808   16696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53832 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:02:47.466584   16696 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 03:02:47.470217   16696 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 03:02:47.470233   16696 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 03:02:47.470240   16696 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 03:02:47.470247   16696 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 03:02:47.470259   16696 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 03:02:47.470343   16696 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 03:02:47.470509   16696 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 03:02:47.470694   16696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 03:02:47.478000   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:02:47.494992   16696 start.go:303] post-start completed in 176.259969ms
	I0114 03:02:47.495560   16696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-030235
	I0114 03:02:47.553629   16696 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/config.json ...
	I0114 03:02:47.554093   16696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 03:02:47.554163   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:47.612526   16696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53832 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:02:47.696942   16696 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 03:02:47.701781   16696 start.go:128] duration metric: createHost completed in 11.397017389s
	I0114 03:02:47.701800   16696 start.go:83] releasing machines lock for "old-k8s-version-030235", held for 11.397169191s
	I0114 03:02:47.701910   16696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-030235
	I0114 03:02:47.759059   16696 ssh_runner.go:195] Run: cat /version.json
	I0114 03:02:47.759063   16696 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0114 03:02:47.759134   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:47.759161   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:47.822172   16696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53832 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:02:47.823880   16696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53832 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:02:48.179241   16696 ssh_runner.go:195] Run: systemctl --version
	I0114 03:02:48.183998   16696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 03:02:48.193605   16696 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 03:02:48.193680   16696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 03:02:48.203049   16696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 03:02:48.216386   16696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 03:02:48.282870   16696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 03:02:48.350776   16696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:02:48.423813   16696 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 03:02:48.622658   16696 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:02:48.653145   16696 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:02:48.727806   16696 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	I0114 03:02:48.727953   16696 cli_runner.go:164] Run: docker exec -t old-k8s-version-030235 dig +short host.docker.internal
	I0114 03:02:48.839359   16696 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 03:02:48.839483   16696 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 03:02:48.843880   16696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 03:02:48.853792   16696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:02:48.911804   16696 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 03:02:48.911885   16696 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:02:48.936391   16696 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0114 03:02:48.936411   16696 docker.go:543] Images already preloaded, skipping extraction
	I0114 03:02:48.936622   16696 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:02:48.960046   16696 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0114 03:02:48.960065   16696 cache_images.go:84] Images are preloaded, skipping loading
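Both docker images calls above run inside the node container and are compared against the image set expected for Kubernetes v1.16.0, which is what lets this start skip loading images from the cache. A rough host-side equivalent of that check, with the expected list copied from the stdout block above (the comparison itself is a simplification of cache_images.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// expected is the image set printed in the "Got preloaded images" block above.
var expected = []string{
	"gcr.io/k8s-minikube/storage-provisioner:v5",
	"k8s.gcr.io/kube-apiserver:v1.16.0",
	"k8s.gcr.io/kube-proxy:v1.16.0",
	"k8s.gcr.io/kube-controller-manager:v1.16.0",
	"k8s.gcr.io/kube-scheduler:v1.16.0",
	"k8s.gcr.io/etcd:3.3.15-0",
	"k8s.gcr.io/coredns:1.6.2",
	"k8s.gcr.io/pause:3.1",
}

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
		}
	}
}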
	I0114 03:02:48.960170   16696 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 03:02:49.038329   16696 cni.go:95] Creating CNI manager for ""
	I0114 03:02:49.038349   16696 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:02:49.038366   16696 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 03:02:49.038381   16696 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-030235 NodeName:old-k8s-version-030235 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 03:02:49.038507   16696 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-030235"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-030235
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 03:02:49.038589   16696 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-030235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-030235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 03:02:49.038658   16696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0114 03:02:49.046720   16696 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 03:02:49.046785   16696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 03:02:49.054485   16696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0114 03:02:49.068875   16696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 03:02:49.083596   16696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0114 03:02:49.096655   16696 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0114 03:02:49.100562   16696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 03:02:49.110481   16696 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235 for IP: 192.168.76.2
	I0114 03:02:49.110607   16696 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 03:02:49.110661   16696 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 03:02:49.110713   16696 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/client.key
	I0114 03:02:49.110732   16696 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/client.crt with IP's: []
	I0114 03:02:49.313517   16696 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/client.crt ...
	I0114 03:02:49.313545   16696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/client.crt: {Name:mkf794a026450812db46a94fc137599df33bee27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:02:49.313895   16696 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/client.key ...
	I0114 03:02:49.313903   16696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/client.key: {Name:mk76ca06fbe3cc4f494315d4688c4ce01b6299ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:02:49.314118   16696 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.key.31bdca25
	I0114 03:02:49.314137   16696 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 03:02:49.453171   16696 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.crt.31bdca25 ...
	I0114 03:02:49.453191   16696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.crt.31bdca25: {Name:mk695e60715bda8f4ffa874f8bb8397b2de81bdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:02:49.453501   16696 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.key.31bdca25 ...
	I0114 03:02:49.453509   16696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.key.31bdca25: {Name:mk9c9f965c41d06f2db3d051be5d9827376b5d56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:02:49.453918   16696 certs.go:320] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.crt
	I0114 03:02:49.454297   16696 certs.go:324] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.key
	I0114 03:02:49.454512   16696 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.key
	I0114 03:02:49.454530   16696 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.crt with IP's: []
	I0114 03:02:49.545281   16696 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.crt ...
	I0114 03:02:49.545301   16696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.crt: {Name:mk06c3de19a4f09ce2f2ef54556c398368712d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:02:49.545687   16696 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.key ...
	I0114 03:02:49.545697   16696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.key: {Name:mkeab6e4c7ee1921b2de275ae7671967aced2980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:02:49.546296   16696 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 03:02:49.546346   16696 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 03:02:49.546372   16696 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 03:02:49.546425   16696 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 03:02:49.546468   16696 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 03:02:49.546506   16696 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 03:02:49.546591   16696 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:02:49.547113   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 03:02:49.568587   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0114 03:02:49.588877   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 03:02:49.609799   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 03:02:49.629539   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 03:02:49.649724   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 03:02:49.670458   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 03:02:49.690006   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 03:02:49.712107   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 03:02:49.734001   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 03:02:49.756889   16696 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 03:02:49.778791   16696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 03:02:49.793955   16696 ssh_runner.go:195] Run: openssl version
	I0114 03:02:49.799557   16696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 03:02:49.807772   16696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 03:02:49.812053   16696 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 03:02:49.812104   16696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 03:02:49.817527   16696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 03:02:49.825953   16696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 03:02:49.836281   16696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:02:49.842662   16696 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:02:49.842754   16696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:02:49.850632   16696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 03:02:49.862490   16696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 03:02:49.872412   16696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 03:02:49.877509   16696 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 03:02:49.877575   16696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 03:02:49.884235   16696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 03:02:49.895697   16696 kubeadm.go:396] StartCluster: {Name:old-k8s-version-030235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-030235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:02:49.895867   16696 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:02:49.920140   16696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 03:02:49.928868   16696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 03:02:49.939837   16696 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 03:02:49.939927   16696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:02:49.950975   16696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 03:02:49.951018   16696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 03:02:50.180266   16696 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 03:02:50.273754   16696 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0114 03:02:50.373500   16696 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 03:04:47.765858   16696 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 03:04:47.766009   16696 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 03:04:47.770700   16696 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0114 03:04:47.770777   16696 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 03:04:47.770876   16696 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 03:04:47.771032   16696 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 03:04:47.771157   16696 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 03:04:47.771295   16696 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 03:04:47.771416   16696 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 03:04:47.771489   16696 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0114 03:04:47.771579   16696 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 03:04:47.796984   16696 out.go:204]   - Generating certificates and keys ...
	I0114 03:04:47.797047   16696 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 03:04:47.797101   16696 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 03:04:47.797159   16696 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 03:04:47.797225   16696 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 03:04:47.797299   16696 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 03:04:47.797342   16696 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 03:04:47.797390   16696 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 03:04:47.797499   16696 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-030235 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0114 03:04:47.797554   16696 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 03:04:47.797672   16696 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-030235 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0114 03:04:47.797738   16696 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 03:04:47.797790   16696 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 03:04:47.797841   16696 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 03:04:47.797892   16696 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 03:04:47.797927   16696 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 03:04:47.797967   16696 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 03:04:47.798010   16696 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 03:04:47.798060   16696 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 03:04:47.798132   16696 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 03:04:47.837980   16696 out.go:204]   - Booting up control plane ...
	I0114 03:04:47.838134   16696 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 03:04:47.838251   16696 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 03:04:47.838309   16696 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 03:04:47.838370   16696 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 03:04:47.838642   16696 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 03:04:47.838721   16696 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 03:04:47.838835   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:04:47.839051   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:04:47.839145   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:04:47.839348   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:04:47.839427   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:04:47.839574   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:04:47.839635   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:04:47.839812   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:04:47.839940   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:04:47.840199   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:04:47.840210   16696 kubeadm.go:317] 
	I0114 03:04:47.840255   16696 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 03:04:47.840305   16696 kubeadm.go:317] 	timed out waiting for the condition
	I0114 03:04:47.840314   16696 kubeadm.go:317] 
	I0114 03:04:47.840364   16696 kubeadm.go:317] This error is likely caused by:
	I0114 03:04:47.840452   16696 kubeadm.go:317] 	- The kubelet is not running
	I0114 03:04:47.840653   16696 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 03:04:47.840680   16696 kubeadm.go:317] 
	I0114 03:04:47.840879   16696 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 03:04:47.840943   16696 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 03:04:47.840973   16696 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 03:04:47.840985   16696 kubeadm.go:317] 
	I0114 03:04:47.841102   16696 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 03:04:47.841242   16696 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0114 03:04:47.841328   16696 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0114 03:04:47.841375   16696 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0114 03:04:47.841513   16696 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 03:04:47.841584   16696 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	W0114 03:04:47.841802   16696 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-030235 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-030235 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-030235 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-030235 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0114 03:04:47.841866   16696 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0114 03:04:48.299386   16696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:04:48.314103   16696 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 03:04:48.314184   16696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:04:48.325014   16696 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 03:04:48.325053   16696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 03:04:48.405264   16696 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0114 03:04:48.405319   16696 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 03:04:48.735759   16696 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 03:04:48.735953   16696 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 03:04:48.736118   16696 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 03:04:48.982356   16696 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 03:04:48.982999   16696 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 03:04:48.990355   16696 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0114 03:04:49.054660   16696 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 03:04:49.075176   16696 out.go:204]   - Generating certificates and keys ...
	I0114 03:04:49.075320   16696 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 03:04:49.075422   16696 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 03:04:49.075524   16696 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 03:04:49.075607   16696 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0114 03:04:49.075674   16696 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0114 03:04:49.075729   16696 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0114 03:04:49.075796   16696 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0114 03:04:49.075860   16696 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0114 03:04:49.075933   16696 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 03:04:49.076007   16696 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 03:04:49.076035   16696 kubeadm.go:317] [certs] Using the existing "sa" key
	I0114 03:04:49.076119   16696 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 03:04:49.248322   16696 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 03:04:49.365288   16696 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 03:04:49.561565   16696 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 03:04:49.737690   16696 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 03:04:49.738417   16696 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 03:04:49.760122   16696 out.go:204]   - Booting up control plane ...
	I0114 03:04:49.760228   16696 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 03:04:49.760371   16696 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 03:04:49.760486   16696 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 03:04:49.760597   16696 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 03:04:49.760733   16696 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 03:05:29.749856   16696 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 03:05:29.751049   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:05:29.751218   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:05:34.752010   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:05:34.752163   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:05:44.754775   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:05:44.754988   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:06:04.756119   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:06:04.756318   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:06:44.758196   16696 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:06:44.758423   16696 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:06:44.758432   16696 kubeadm.go:317] 
	I0114 03:06:44.758479   16696 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 03:06:44.758531   16696 kubeadm.go:317] 	timed out waiting for the condition
	I0114 03:06:44.758548   16696 kubeadm.go:317] 
	I0114 03:06:44.758583   16696 kubeadm.go:317] This error is likely caused by:
	I0114 03:06:44.758618   16696 kubeadm.go:317] 	- The kubelet is not running
	I0114 03:06:44.758767   16696 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 03:06:44.758788   16696 kubeadm.go:317] 
	I0114 03:06:44.758925   16696 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 03:06:44.758966   16696 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 03:06:44.758999   16696 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 03:06:44.759005   16696 kubeadm.go:317] 
	I0114 03:06:44.759093   16696 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 03:06:44.759176   16696 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0114 03:06:44.759273   16696 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0114 03:06:44.759332   16696 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0114 03:06:44.759425   16696 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 03:06:44.759472   16696 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0114 03:06:44.762165   16696 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 03:06:44.762273   16696 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0114 03:06:44.762361   16696 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 03:06:44.762428   16696 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 03:06:44.762492   16696 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 03:06:44.762516   16696 kubeadm.go:398] StartCluster complete in 3m54.876027494s
	I0114 03:06:44.762614   16696 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:06:44.785263   16696 logs.go:274] 0 containers: []
	W0114 03:06:44.785275   16696 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:06:44.785360   16696 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:06:44.809230   16696 logs.go:274] 0 containers: []
	W0114 03:06:44.809243   16696 logs.go:276] No container was found matching "etcd"
	I0114 03:06:44.809332   16696 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:06:44.833287   16696 logs.go:274] 0 containers: []
	W0114 03:06:44.833300   16696 logs.go:276] No container was found matching "coredns"
	I0114 03:06:44.833382   16696 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:06:44.856976   16696 logs.go:274] 0 containers: []
	W0114 03:06:44.856989   16696 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:06:44.857077   16696 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:06:44.879880   16696 logs.go:274] 0 containers: []
	W0114 03:06:44.879894   16696 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:06:44.879980   16696 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:06:44.902794   16696 logs.go:274] 0 containers: []
	W0114 03:06:44.902806   16696 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:06:44.902887   16696 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:06:44.926990   16696 logs.go:274] 0 containers: []
	W0114 03:06:44.927002   16696 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:06:44.927091   16696 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:06:44.950334   16696 logs.go:274] 0 containers: []
	W0114 03:06:44.950351   16696 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:06:44.950362   16696 logs.go:123] Gathering logs for container status ...
	I0114 03:06:44.950371   16696 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:06:47.000544   16696 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050146381s)
	I0114 03:06:47.000690   16696 logs.go:123] Gathering logs for kubelet ...
	I0114 03:06:47.000697   16696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:06:47.039637   16696 logs.go:123] Gathering logs for dmesg ...
	I0114 03:06:47.039651   16696 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:06:47.051768   16696 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:06:47.051782   16696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:06:47.106488   16696 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:06:47.106500   16696 logs.go:123] Gathering logs for Docker ...
	I0114 03:06:47.106506   16696 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0114 03:06:47.122012   16696 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0114 03:06:47.122033   16696 out.go:239] * 
	W0114 03:06:47.122153   16696 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 03:06:47.122166   16696 out.go:239] * 
	W0114 03:06:47.122813   16696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 03:06:47.185359   16696 out.go:177] 
	W0114 03:06:47.228428   16696 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 03:06:47.228500   16696 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0114 03:06:47.228543   16696 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0114 03:06:47.249275   16696 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-030235 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
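kubeadm's guidance in the stderr above points at the kubelet and at possibly crashed control-plane containers. One way to follow that guidance on this runner is sketched below; it assumes the commands are run from the workspace against the same profile and that the kicbase node is reachable via 'minikube ssh' (not verified here):

	# follow kubeadm's troubleshooting advice inside the node container
	out/minikube-darwin-amd64 -p old-k8s-version-030235 ssh -- "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 -p old-k8s-version-030235 ssh -- "sudo journalctl -xeu kubelet | tail -n 50"
	# list Kubernetes containers to spot a crashed control-plane component
	out/minikube-darwin-amd64 -p old-k8s-version-030235 ssh -- "docker ps -a | grep kube | grep -v pause"
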
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-030235
helpers_test.go:235: (dbg) docker inspect old-k8s-version-030235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942",
	        "Created": "2023-01-14T11:02:44.471910321Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256371,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T11:02:44.850584543Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hostname",
	        "HostsPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hosts",
	        "LogPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942-json.log",
	        "Name": "/old-k8s-version-030235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-030235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-030235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-030235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-030235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-030235",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b393e8cc2ca9791c036b47e94c66bf02358a9f3b2722a25ce7542ec7cb04d83c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53834"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53836"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b393e8cc2ca9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-030235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d977528adcbe",
	                        "old-k8s-version-030235"
	                    ],
	                    "NetworkID": "ab958c8662819925836c350f1443c8060424291379d9dc2b6c89656fa5f7da2a",
	                    "EndpointID": "a44f0767487daa905fe61590638004cbf71d83c3b900131448d936455bc13f58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 6 (398.039032ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 03:06:47.793021   17603 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-030235" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-030235" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (252.71s)
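The K8S_KUBELET_NOT_RUNNING exit above ships a concrete suggestion. A hedged retry of the same first start with the suggested kubelet cgroup-driver override is sketched below (all other flags copied from the failing args; whether the override actually clears the failure on this runner is not verified):

	out/minikube-darwin-amd64 start -p old-k8s-version-030235 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
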

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (59.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.109476461s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.109511853s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0114 03:03:42.218503    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.121662728s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.106354313s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0114 03:03:59.163010    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 03:04:00.525962    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:00.531304    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:00.543624    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:00.565823    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:00.608065    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:00.688229    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:00.850369    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.121451237s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0114 03:04:01.172426    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:01.814290    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:03.095290    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:05.655983    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:08.539920    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0114 03:04:10.805509    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.124752755s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0114 03:04:19.846901    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 03:04:21.045927    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:21.464865    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0114 03:04:27.558462    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.118606177s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (59.38s)
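The failing probe here is the hairpin check: the netcat pod tries to reach port 8080 on its own service (the 'netcat' target of the nc command above). A short manual narrowing-down sketch against the same context, assuming the deployment is fronted by a Service/Endpoints pair of the same name:

	# confirm the service exists and its endpoints point back at the netcat pod
	kubectl --context kubenet-024325 get svc,endpoints netcat -o wide
	# repeat the exact probe the test runs
	kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
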

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-030235 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-030235 create -f testdata/busybox.yaml: exit status 1 (34.769897ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-030235" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-030235 create -f testdata/busybox.yaml failed: exit status 1
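This is a follow-on failure: because FirstStart never brought the cluster up, no "old-k8s-version-030235" context was written to the kubeconfig, so every kubectl --context call in this serial group fails the same way. A quick check, assuming the integration kubeconfig path reported in the status stderr above:

	KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig kubectl config get-contexts
	# if the profile's cluster were actually healthy, the context could be regenerated from it
	out/minikube-darwin-amd64 -p old-k8s-version-030235 update-context
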
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-030235
helpers_test.go:235: (dbg) docker inspect old-k8s-version-030235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942",
	        "Created": "2023-01-14T11:02:44.471910321Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256371,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T11:02:44.850584543Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hostname",
	        "HostsPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hosts",
	        "LogPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942-json.log",
	        "Name": "/old-k8s-version-030235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-030235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-030235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-030235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-030235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-030235",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b393e8cc2ca9791c036b47e94c66bf02358a9f3b2722a25ce7542ec7cb04d83c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53834"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53836"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b393e8cc2ca9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-030235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d977528adcbe",
	                        "old-k8s-version-030235"
	                    ],
	                    "NetworkID": "ab958c8662819925836c350f1443c8060424291379d9dc2b6c89656fa5f7da2a",
	                    "EndpointID": "a44f0767487daa905fe61590638004cbf71d83c3b900131448d936455bc13f58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 6 (404.01889ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 03:06:48.293684   17618 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-030235" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-030235" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-030235
helpers_test.go:235: (dbg) docker inspect old-k8s-version-030235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942",
	        "Created": "2023-01-14T11:02:44.471910321Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256371,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T11:02:44.850584543Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hostname",
	        "HostsPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hosts",
	        "LogPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942-json.log",
	        "Name": "/old-k8s-version-030235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-030235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-030235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-030235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-030235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-030235",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b393e8cc2ca9791c036b47e94c66bf02358a9f3b2722a25ce7542ec7cb04d83c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53834"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53836"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b393e8cc2ca9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-030235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d977528adcbe",
	                        "old-k8s-version-030235"
	                    ],
	                    "NetworkID": "ab958c8662819925836c350f1443c8060424291379d9dc2b6c89656fa5f7da2a",
	                    "EndpointID": "a44f0767487daa905fe61590638004cbf71d83c3b900131448d936455bc13f58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 6 (399.578917ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 03:06:48.752997   17630 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-030235" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-030235" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-030235 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0114 03:07:07.567373    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:07.573735    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:07.584008    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:07.606161    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:07.647229    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:07.727368    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:07.889513    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:08.209841    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:08.851779    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:10.134104    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:11.845956    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:11.851098    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:11.862071    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:11.882249    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:11.922587    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:12.003482    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:12.163636    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:12.484482    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:12.695055    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:13.124788    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:14.405589    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:16.773527    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:07:16.967934    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:17.817354    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:22.088407    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:28.059401    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:32.330233    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:46.605082    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:07:48.539847    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:07:52.812580    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:07:58.404989    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
E0114 03:08:14.291188    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
E0114 03:08:15.040338    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:15.046803    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:15.058630    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:15.079259    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:15.119663    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:15.200126    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:15.360315    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:15.681135    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:16.321350    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:17.601668    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-030235 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.176403914s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-030235 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-030235 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-030235 describe deploy/metrics-server -n kube-system: exit status 1 (36.683851ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-030235" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-030235 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-030235
helpers_test.go:235: (dbg) docker inspect old-k8s-version-030235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942",
	        "Created": "2023-01-14T11:02:44.471910321Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256371,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T11:02:44.850584543Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hostname",
	        "HostsPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hosts",
	        "LogPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942-json.log",
	        "Name": "/old-k8s-version-030235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-030235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-030235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-030235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-030235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-030235",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b393e8cc2ca9791c036b47e94c66bf02358a9f3b2722a25ce7542ec7cb04d83c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53834"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "53836"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b393e8cc2ca9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-030235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d977528adcbe",
	                        "old-k8s-version-030235"
	                    ],
	                    "NetworkID": "ab958c8662819925836c350f1443c8060424291379d9dc2b6c89656fa5f7da2a",
	                    "EndpointID": "a44f0767487daa905fe61590638004cbf71d83c3b900131448d936455bc13f58",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 6 (399.564725ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0114 03:08:18.426070   17767 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-030235" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-030235" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.67s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (489.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-030235 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0114 03:08:25.285371    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:29.500825    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:08:33.773265    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:08:35.525850    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:38.694532    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:08:56.007668    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:08:59.153510    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 03:09:00.515357    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:09:02.896896    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 03:09:19.836465    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 03:09:27.549037    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 03:09:28.241921    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:09:36.969297    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:09:51.421646    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:09:55.694038    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:10:41.613246    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-030235 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m5.343458709s)

-- stdout --
	* [old-k8s-version-030235] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-030235 in cluster old-k8s-version-030235
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-030235" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0114 03:08:20.490632   17797 out.go:296] Setting OutFile to fd 1 ...
	I0114 03:08:20.490900   17797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:08:20.490907   17797 out.go:309] Setting ErrFile to fd 2...
	I0114 03:08:20.490911   17797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:08:20.491020   17797 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 03:08:20.491530   17797 out.go:303] Setting JSON to false
	I0114 03:08:20.510445   17797 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4074,"bootTime":1673690426,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 03:08:20.510563   17797 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 03:08:20.531993   17797 out.go:177] * [old-k8s-version-030235] minikube v1.28.0 on Darwin 13.0.1
	I0114 03:08:20.574893   17797 notify.go:220] Checking for updates...
	I0114 03:08:20.595626   17797 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 03:08:20.637815   17797 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 03:08:20.701795   17797 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 03:08:20.722716   17797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 03:08:20.743959   17797 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 03:08:20.766599   17797 config.go:180] Loaded profile config "old-k8s-version-030235": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0114 03:08:20.788667   17797 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0114 03:08:20.810058   17797 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 03:08:20.872535   17797 docker.go:138] docker version: linux-20.10.21
	I0114 03:08:20.872685   17797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:08:21.013921   17797 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:08:20.92278229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:08:21.056292   17797 out.go:177] * Using the docker driver based on existing profile
	I0114 03:08:21.077510   17797 start.go:294] selected driver: docker
	I0114 03:08:21.077524   17797 start.go:838] validating driver "docker" against &{Name:old-k8s-version-030235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-030235 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:08:21.077609   17797 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 03:08:21.080080   17797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:08:21.223939   17797 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:08:21.132352183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:08:21.224100   17797 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 03:08:21.224120   17797 cni.go:95] Creating CNI manager for ""
	I0114 03:08:21.224130   17797 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:08:21.224142   17797 start_flags.go:319] config:
	{Name:old-k8s-version-030235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-030235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:08:21.246251   17797 out.go:177] * Starting control plane node old-k8s-version-030235 in cluster old-k8s-version-030235
	I0114 03:08:21.267807   17797 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 03:08:21.288892   17797 out.go:177] * Pulling base image ...
	I0114 03:08:21.330820   17797 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 03:08:21.330890   17797 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 03:08:21.330920   17797 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0114 03:08:21.330934   17797 cache.go:57] Caching tarball of preloaded images
	I0114 03:08:21.331166   17797 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 03:08:21.331188   17797 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0114 03:08:21.331981   17797 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/config.json ...
	I0114 03:08:21.387265   17797 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 03:08:21.387280   17797 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 03:08:21.387302   17797 cache.go:193] Successfully downloaded all kic artifacts
	I0114 03:08:21.387345   17797 start.go:364] acquiring machines lock for old-k8s-version-030235: {Name:mk0a4f570c8f2752e6db1ad5a8ffefc98930515a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:08:21.387436   17797 start.go:368] acquired machines lock for "old-k8s-version-030235" in 72.618µs
	I0114 03:08:21.387461   17797 start.go:96] Skipping create...Using existing machine configuration
	I0114 03:08:21.387472   17797 fix.go:55] fixHost starting: 
	I0114 03:08:21.387730   17797 cli_runner.go:164] Run: docker container inspect old-k8s-version-030235 --format={{.State.Status}}
	I0114 03:08:21.445500   17797 fix.go:103] recreateIfNeeded on old-k8s-version-030235: state=Stopped err=<nil>
	W0114 03:08:21.445531   17797 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 03:08:21.489235   17797 out.go:177] * Restarting existing docker container for "old-k8s-version-030235" ...
	I0114 03:08:21.511149   17797 cli_runner.go:164] Run: docker start old-k8s-version-030235
	I0114 03:08:21.845877   17797 cli_runner.go:164] Run: docker container inspect old-k8s-version-030235 --format={{.State.Status}}
	I0114 03:08:21.907099   17797 kic.go:426] container "old-k8s-version-030235" state is running.
	I0114 03:08:21.907723   17797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-030235
	I0114 03:08:21.972829   17797 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/config.json ...
	I0114 03:08:21.973426   17797 machine.go:88] provisioning docker machine ...
	I0114 03:08:21.973485   17797 ubuntu.go:169] provisioning hostname "old-k8s-version-030235"
	I0114 03:08:21.973600   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:22.038845   17797 main.go:134] libmachine: Using SSH client type: native
	I0114 03:08:22.039037   17797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54079 <nil> <nil>}
	I0114 03:08:22.039049   17797 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-030235 && echo "old-k8s-version-030235" | sudo tee /etc/hostname
	I0114 03:08:22.163927   17797 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-030235
	
	I0114 03:08:22.164066   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:22.225404   17797 main.go:134] libmachine: Using SSH client type: native
	I0114 03:08:22.225605   17797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54079 <nil> <nil>}
	I0114 03:08:22.225620   17797 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-030235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-030235/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-030235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 03:08:22.344550   17797 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:08:22.344571   17797 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 03:08:22.344622   17797 ubuntu.go:177] setting up certificates
	I0114 03:08:22.344631   17797 provision.go:83] configureAuth start
	I0114 03:08:22.344715   17797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-030235
	I0114 03:08:22.401774   17797 provision.go:138] copyHostCerts
	I0114 03:08:22.401874   17797 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 03:08:22.401883   17797 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 03:08:22.401990   17797 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 03:08:22.402200   17797 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 03:08:22.402207   17797 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 03:08:22.402271   17797 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 03:08:22.402420   17797 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 03:08:22.402426   17797 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 03:08:22.402489   17797 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 03:08:22.402645   17797 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-030235 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-030235]
	I0114 03:08:22.611683   17797 provision.go:172] copyRemoteCerts
	I0114 03:08:22.611750   17797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 03:08:22.611810   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:22.670688   17797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54079 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:08:22.756653   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0114 03:08:22.773953   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 03:08:22.791307   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 03:08:22.808650   17797 provision.go:86] duration metric: configureAuth took 464.005029ms
	I0114 03:08:22.808664   17797 ubuntu.go:193] setting minikube options for container-runtime
	I0114 03:08:22.808838   17797 config.go:180] Loaded profile config "old-k8s-version-030235": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0114 03:08:22.808911   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:22.866306   17797 main.go:134] libmachine: Using SSH client type: native
	I0114 03:08:22.866468   17797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54079 <nil> <nil>}
	I0114 03:08:22.866478   17797 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 03:08:22.983782   17797 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 03:08:22.983802   17797 ubuntu.go:71] root file system type: overlay
	I0114 03:08:22.983951   17797 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 03:08:22.984048   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:23.043563   17797 main.go:134] libmachine: Using SSH client type: native
	I0114 03:08:23.043722   17797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54079 <nil> <nil>}
	I0114 03:08:23.043768   17797 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 03:08:23.170666   17797 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 03:08:23.170765   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:23.230502   17797 main.go:134] libmachine: Using SSH client type: native
	I0114 03:08:23.230656   17797 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54079 <nil> <nil>}
	I0114 03:08:23.230671   17797 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 03:08:23.351522   17797 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:08:23.351537   17797 machine.go:91] provisioned docker machine in 1.378093225s
	I0114 03:08:23.351547   17797 start.go:300] post-start starting for "old-k8s-version-030235" (driver="docker")
	I0114 03:08:23.351553   17797 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 03:08:23.351621   17797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 03:08:23.351698   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:23.410352   17797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54079 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:08:23.495494   17797 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 03:08:23.499134   17797 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 03:08:23.499152   17797 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 03:08:23.499159   17797 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 03:08:23.499166   17797 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 03:08:23.499174   17797 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 03:08:23.499269   17797 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 03:08:23.499430   17797 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 03:08:23.499619   17797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 03:08:23.507201   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:08:23.524339   17797 start.go:303] post-start completed in 172.778677ms
	I0114 03:08:23.524431   17797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 03:08:23.524501   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:23.582061   17797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54079 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:08:23.664823   17797 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 03:08:23.669253   17797 fix.go:57] fixHost completed within 2.28176726s
	I0114 03:08:23.669264   17797 start.go:83] releasing machines lock for "old-k8s-version-030235", held for 2.281805334s
	I0114 03:08:23.669370   17797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-030235
	I0114 03:08:23.727373   17797 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0114 03:08:23.727373   17797 ssh_runner.go:195] Run: cat /version.json
	I0114 03:08:23.727485   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:23.727485   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:23.788263   17797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54079 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:08:23.788771   17797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54079 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/old-k8s-version-030235/id_rsa Username:docker}
	I0114 03:08:23.871460   17797 ssh_runner.go:195] Run: systemctl --version
	I0114 03:08:24.150823   17797 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 03:08:24.160949   17797 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 03:08:24.161023   17797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 03:08:24.172871   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 03:08:24.185857   17797 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 03:08:24.253589   17797 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 03:08:24.323764   17797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:08:24.391097   17797 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 03:08:24.597082   17797 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:08:24.627717   17797 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:08:24.701569   17797 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.21 ...
	I0114 03:08:24.701837   17797 cli_runner.go:164] Run: docker exec -t old-k8s-version-030235 dig +short host.docker.internal
	I0114 03:08:24.813627   17797 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 03:08:24.813746   17797 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 03:08:24.818030   17797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 03:08:24.827992   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:24.885767   17797 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 03:08:24.885856   17797 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:08:24.910585   17797 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0114 03:08:24.910608   17797 docker.go:543] Images already preloaded, skipping extraction
	I0114 03:08:24.910699   17797 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:08:24.933890   17797 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0114 03:08:24.933908   17797 cache_images.go:84] Images are preloaded, skipping loading
	I0114 03:08:24.934000   17797 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 03:08:25.005334   17797 cni.go:95] Creating CNI manager for ""
	I0114 03:08:25.005353   17797 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:08:25.005374   17797 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 03:08:25.005389   17797 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-030235 NodeName:old-k8s-version-030235 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 03:08:25.005497   17797 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-030235"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-030235
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 03:08:25.005575   17797 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-030235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-030235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 03:08:25.005646   17797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0114 03:08:25.013476   17797 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 03:08:25.013539   17797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 03:08:25.020864   17797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0114 03:08:25.033620   17797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 03:08:25.046343   17797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0114 03:08:25.059391   17797 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0114 03:08:25.063293   17797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 03:08:25.073273   17797 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235 for IP: 192.168.76.2
	I0114 03:08:25.073418   17797 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 03:08:25.073480   17797 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 03:08:25.073583   17797 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/client.key
	I0114 03:08:25.073665   17797 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.key.31bdca25
	I0114 03:08:25.073732   17797 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.key
	I0114 03:08:25.073964   17797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 03:08:25.074002   17797 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 03:08:25.074014   17797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 03:08:25.074060   17797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 03:08:25.074101   17797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 03:08:25.074138   17797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 03:08:25.074218   17797 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:08:25.074840   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 03:08:25.092319   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0114 03:08:25.109914   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 03:08:25.127291   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/old-k8s-version-030235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 03:08:25.144694   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 03:08:25.161960   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 03:08:25.179477   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 03:08:25.196625   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 03:08:25.213904   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 03:08:25.232885   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 03:08:25.250558   17797 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 03:08:25.268218   17797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 03:08:25.281394   17797 ssh_runner.go:195] Run: openssl version
	I0114 03:08:25.286977   17797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 03:08:25.295355   17797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 03:08:25.299230   17797 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 03:08:25.299277   17797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 03:08:25.304707   17797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 03:08:25.312363   17797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 03:08:25.320507   17797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:08:25.324636   17797 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:08:25.324700   17797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:08:25.330069   17797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 03:08:25.337652   17797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 03:08:25.346176   17797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 03:08:25.351054   17797 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 03:08:25.351121   17797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 03:08:25.356782   17797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
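
	The three "ln -fs" steps above follow the standard OpenSSL lookup convention: each CA PEM placed under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and exposed in /etc/ssl/certs as a <subject-hash>.0 symlink so TLS clients can find it by hash. The snippet below is only a minimal local sketch of that convention, not minikube's certs.go; installCACert and its paths are illustrative names chosen for the example.

```go
// Illustrative sketch: install a CA certificate under a certs directory
// using the OpenSSL subject-hash symlink convention seen in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(certPath, certsDir string) error {
	// Copy the PEM into the shared CA directory.
	dst := filepath.Join(certsDir, filepath.Base(certPath))
	data, err := os.ReadFile(certPath)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0o644); err != nil {
		return err
	}
	// Ask openssl for the subject hash, exactly as the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL resolves CAs through <hash>.0 symlinks in the certs directory.
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link)
	return os.Symlink(dst, link)
}

func main() {
	if err := installCACert("minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
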
	I0114 03:08:25.364977   17797 kubeadm.go:396] StartCluster: {Name:old-k8s-version-030235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-030235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:08:25.365116   17797 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:08:25.388078   17797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 03:08:25.396014   17797 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 03:08:25.396026   17797 kubeadm.go:627] restartCluster start
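
	The restart decision above is driven purely by the presence of prior kubeadm state on the node: the "sudo ls" probe succeeded for /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and /var/lib/minikube/etcd, so kubeadm.go attempts a cluster restart instead of a fresh init. Below is a minimal sketch of that check under the assumption that the same three paths decide the outcome; hasExistingCluster is an illustrative name, not minikube's actual helper.

```go
// Illustrative sketch: choose between "cluster restart" and "fresh init"
// by checking for the configuration files the log probes above.
package main

import (
	"fmt"
	"os"
)

func hasExistingCluster() bool {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range paths {
		// Any missing file means there is no prior cluster state to reuse.
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if hasExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing configuration, running a fresh kubeadm init")
	}
}
```
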
	I0114 03:08:25.396083   17797 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 03:08:25.403073   17797 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:25.403157   17797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-030235
	I0114 03:08:25.461416   17797 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-030235" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 03:08:25.461570   17797 kubeconfig.go:146] "old-k8s-version-030235" context is missing from /Users/jenkins/minikube-integration/15642-1559/kubeconfig - will repair!
	I0114 03:08:25.461905   17797 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/kubeconfig: {Name:mkb6d1db5780815291441dc67b348461b9325651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:08:25.463336   17797 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 03:08:25.471446   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:25.471517   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:25.480185   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:25.681764   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:25.681899   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:25.692915   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:25.882311   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:25.882471   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:25.893597   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:26.080855   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:26.080995   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:26.091913   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:26.281347   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:26.281532   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:26.292525   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:26.480344   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:26.480461   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:26.491212   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:26.681056   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:26.681195   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:26.691225   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:26.882264   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:26.882388   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:26.893533   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:27.081263   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:27.081410   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:27.092293   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:27.281499   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:27.281636   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:27.292718   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:27.481067   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:27.481200   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:27.492159   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:27.681456   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:27.681549   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:27.691311   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:27.880695   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:27.880845   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:27.891550   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:28.080846   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:28.080978   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:28.092178   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:28.281169   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:28.281326   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:28.292536   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:28.480855   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:28.481024   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:28.492405   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:28.492416   17797 api_server.go:165] Checking apiserver status ...
	I0114 03:08:28.492492   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:08:28.501198   17797 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:08:28.501211   17797 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0114 03:08:28.501220   17797 kubeadm.go:1114] stopping kube-system containers ...
	I0114 03:08:28.501302   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:08:28.524204   17797 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 03:08:28.534795   17797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:08:28.542653   17797 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Jan 14 11:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Jan 14 11:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Jan 14 11:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jan 14 11:04 /etc/kubernetes/scheduler.conf
	
	I0114 03:08:28.542724   17797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 03:08:28.550635   17797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 03:08:28.559525   17797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 03:08:28.567363   17797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 03:08:28.575217   17797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 03:08:28.583005   17797 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 03:08:28.583016   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:08:28.635765   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:08:29.213690   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:08:29.429844   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:08:29.487665   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:08:29.569426   17797 api_server.go:51] waiting for apiserver process to appear ...
	I0114 03:08:29.569496   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:30.079936   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:30.580632   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:31.079653   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:31.578720   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:32.078878   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:32.579165   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:33.078938   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:33.579300   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:34.078833   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:34.579071   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:35.079268   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:35.579145   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:36.078758   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:36.578690   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:37.079290   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:37.578675   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:38.079263   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:38.579917   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:39.080699   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:39.579348   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:40.079910   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:40.578762   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:41.079015   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:41.579790   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:42.078771   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:42.579623   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:43.080741   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:43.578928   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:44.078770   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:44.580150   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:45.078982   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:45.578811   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:46.079953   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:46.578914   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:47.079149   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:47.579580   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:48.079166   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:48.579133   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:49.079064   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:49.578898   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:50.079543   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:50.579062   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:51.078886   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:51.579083   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:52.080007   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:52.579676   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:53.078836   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:53.578987   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:54.080823   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:54.579373   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:55.078759   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:55.578915   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:56.078855   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:56.579099   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:57.078821   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:57.579603   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:58.079679   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:58.580873   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:59.078856   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:08:59.579125   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:00.078929   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:00.579156   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:01.080043   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:01.580109   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:02.079104   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:02.579038   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:03.079103   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:03.578886   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:04.078916   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:04.579036   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:05.079582   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:05.580344   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:06.079594   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:06.578932   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:07.079300   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:07.579249   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:08.079749   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:08.578837   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:09.080952   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:09.579038   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:10.079153   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:10.579026   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:11.079010   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:11.580074   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:12.078943   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:12.578937   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:13.079549   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:13.578894   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:14.079291   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:14.579447   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:15.079134   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:15.580030   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:16.079118   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:16.580292   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:17.079526   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:17.580016   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:18.079787   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:18.579508   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:19.079078   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:19.579734   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:20.079116   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:20.579030   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:21.078984   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:21.579019   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:22.079129   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:22.580766   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:23.079008   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:23.579035   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:24.080168   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:24.581095   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:25.080208   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:25.580931   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:26.079005   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:26.579201   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:27.079691   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:27.579105   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:28.079005   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:28.579047   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:29.079964   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
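
	The long run of pgrep calls above is a plain poll loop: roughly every half second the test driver asks the node for a kube-apiserver process by full command line, and after about a minute without a hit it gives up and switches to collecting diagnostics. The sketch below reproduces that shape only; the 500ms interval and one-minute deadline are read off the timestamps above, and waitForAPIServerPID is an illustrative name, not minikube's api_server.go.

```go
// Illustrative sketch: poll for the kube-apiserver process at a fixed
// interval until it appears or a deadline passes, mirroring the repeated
// "sudo pgrep -xnf kube-apiserver.*minikube.*" probes in the log.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServerPID(ctx context.Context, interval time.Duration) (string, error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		// Same probe as the log: match the full command line with pgrep.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
			// Try again on the next tick.
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx, 500*time.Millisecond)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
```
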
	I0114 03:09:29.579391   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:09:29.603842   17797 logs.go:274] 0 containers: []
	W0114 03:09:29.603855   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:09:29.603941   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:09:29.627482   17797 logs.go:274] 0 containers: []
	W0114 03:09:29.627495   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:09:29.627579   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:09:29.651546   17797 logs.go:274] 0 containers: []
	W0114 03:09:29.651560   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:09:29.651654   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:09:29.674148   17797 logs.go:274] 0 containers: []
	W0114 03:09:29.674161   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:09:29.674244   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:09:29.696983   17797 logs.go:274] 0 containers: []
	W0114 03:09:29.696997   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:09:29.697084   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:09:29.719478   17797 logs.go:274] 0 containers: []
	W0114 03:09:29.719491   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:09:29.719579   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:09:29.742958   17797 logs.go:274] 0 containers: []
	W0114 03:09:29.742972   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:09:29.743056   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:09:29.765525   17797 logs.go:274] 0 containers: []
	W0114 03:09:29.765540   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:09:29.765547   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:09:29.765554   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:09:29.777433   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:09:29.777448   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:09:29.831005   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:09:29.831021   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:09:29.831027   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:09:29.845245   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:09:29.845259   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:09:31.894709   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049421434s)
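
	The container-status probe that just completed prefers crictl when it is installed and otherwise falls back to docker, which is what the compound command "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" expresses. A minimal sketch of the same fallback is shown below; containerStatus is an illustrative name, not minikube's logs.go.

```go
// Illustrative sketch: list all containers with crictl if available,
// otherwise fall back to docker, as the probe in the log does.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	// Prefer crictl when it is on PATH and succeeds.
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	// Fall back to docker.
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(out)
}
```
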
	I0114 03:09:31.894852   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:09:31.894859   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:09:34.433489   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:34.579190   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:09:34.602763   17797 logs.go:274] 0 containers: []
	W0114 03:09:34.602782   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:09:34.602882   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:09:34.625321   17797 logs.go:274] 0 containers: []
	W0114 03:09:34.625333   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:09:34.625422   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:09:34.648490   17797 logs.go:274] 0 containers: []
	W0114 03:09:34.648502   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:09:34.648587   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:09:34.671421   17797 logs.go:274] 0 containers: []
	W0114 03:09:34.671436   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:09:34.671521   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:09:34.694653   17797 logs.go:274] 0 containers: []
	W0114 03:09:34.694668   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:09:34.694757   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:09:34.736508   17797 logs.go:274] 0 containers: []
	W0114 03:09:34.736521   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:09:34.736609   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:09:34.759898   17797 logs.go:274] 0 containers: []
	W0114 03:09:34.759910   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:09:34.759992   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:09:34.782335   17797 logs.go:274] 0 containers: []
	W0114 03:09:34.782347   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:09:34.782357   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:09:34.782364   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:09:34.819747   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:09:34.819761   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:09:34.832036   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:09:34.832049   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:09:34.885184   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:09:34.885196   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:09:34.885205   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:09:34.899329   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:09:34.899340   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:09:36.947798   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048432522s)
	I0114 03:09:39.449436   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:39.580516   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:09:39.605454   17797 logs.go:274] 0 containers: []
	W0114 03:09:39.605467   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:09:39.605554   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:09:39.628819   17797 logs.go:274] 0 containers: []
	W0114 03:09:39.628832   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:09:39.628924   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:09:39.652532   17797 logs.go:274] 0 containers: []
	W0114 03:09:39.652545   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:09:39.652636   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:09:39.676338   17797 logs.go:274] 0 containers: []
	W0114 03:09:39.676351   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:09:39.676436   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:09:39.700315   17797 logs.go:274] 0 containers: []
	W0114 03:09:39.700328   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:09:39.700414   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:09:39.723013   17797 logs.go:274] 0 containers: []
	W0114 03:09:39.723027   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:09:39.723111   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:09:39.745774   17797 logs.go:274] 0 containers: []
	W0114 03:09:39.745790   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:09:39.745878   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:09:39.768976   17797 logs.go:274] 0 containers: []
	W0114 03:09:39.768988   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:09:39.768996   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:09:39.769001   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:09:39.783003   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:09:39.783017   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:09:41.832862   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049816956s)
	I0114 03:09:41.832976   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:09:41.832983   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:09:41.870743   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:09:41.870758   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:09:41.882878   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:09:41.882896   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:09:41.937378   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:09:44.437645   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:44.579374   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:09:44.604953   17797 logs.go:274] 0 containers: []
	W0114 03:09:44.604965   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:09:44.605049   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:09:44.628405   17797 logs.go:274] 0 containers: []
	W0114 03:09:44.628418   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:09:44.628503   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:09:44.652051   17797 logs.go:274] 0 containers: []
	W0114 03:09:44.652068   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:09:44.652138   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:09:44.675591   17797 logs.go:274] 0 containers: []
	W0114 03:09:44.675604   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:09:44.675687   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:09:44.699272   17797 logs.go:274] 0 containers: []
	W0114 03:09:44.699284   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:09:44.699376   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:09:44.724260   17797 logs.go:274] 0 containers: []
	W0114 03:09:44.724274   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:09:44.724356   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:09:44.747317   17797 logs.go:274] 0 containers: []
	W0114 03:09:44.747331   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:09:44.747419   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:09:44.770758   17797 logs.go:274] 0 containers: []
	W0114 03:09:44.770773   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:09:44.770780   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:09:44.770787   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:09:44.807498   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:09:44.807513   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:09:44.819503   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:09:44.819515   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:09:44.874935   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:09:44.874950   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:09:44.874956   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:09:44.888884   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:09:44.888897   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:09:46.938733   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049810086s)
	I0114 03:09:49.439342   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:49.579559   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:09:49.603953   17797 logs.go:274] 0 containers: []
	W0114 03:09:49.603967   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:09:49.604054   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:09:49.628027   17797 logs.go:274] 0 containers: []
	W0114 03:09:49.628041   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:09:49.628133   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:09:49.651899   17797 logs.go:274] 0 containers: []
	W0114 03:09:49.651918   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:09:49.652013   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:09:49.677819   17797 logs.go:274] 0 containers: []
	W0114 03:09:49.677832   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:09:49.677926   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:09:49.725262   17797 logs.go:274] 0 containers: []
	W0114 03:09:49.725275   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:09:49.725362   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:09:49.749545   17797 logs.go:274] 0 containers: []
	W0114 03:09:49.749558   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:09:49.749653   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:09:49.773835   17797 logs.go:274] 0 containers: []
	W0114 03:09:49.773848   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:09:49.773960   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:09:49.797246   17797 logs.go:274] 0 containers: []
	W0114 03:09:49.797260   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:09:49.797267   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:09:49.797276   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:09:49.850391   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:09:49.850407   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:09:49.850414   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:09:49.864311   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:09:49.864324   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:09:51.913213   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048860724s)
	I0114 03:09:51.913328   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:09:51.913336   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:09:51.951369   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:09:51.951388   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:09:54.464510   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:54.580183   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:09:54.605279   17797 logs.go:274] 0 containers: []
	W0114 03:09:54.605291   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:09:54.605380   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:09:54.627981   17797 logs.go:274] 0 containers: []
	W0114 03:09:54.627995   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:09:54.628081   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:09:54.650461   17797 logs.go:274] 0 containers: []
	W0114 03:09:54.650473   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:09:54.650558   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:09:54.673879   17797 logs.go:274] 0 containers: []
	W0114 03:09:54.673894   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:09:54.673976   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:09:54.697005   17797 logs.go:274] 0 containers: []
	W0114 03:09:54.697018   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:09:54.697102   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:09:54.720000   17797 logs.go:274] 0 containers: []
	W0114 03:09:54.720016   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:09:54.720127   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:09:54.744842   17797 logs.go:274] 0 containers: []
	W0114 03:09:54.744856   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:09:54.744948   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:09:54.768553   17797 logs.go:274] 0 containers: []
	W0114 03:09:54.768567   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:09:54.768575   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:09:54.768582   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:09:54.806759   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:09:54.806773   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:09:54.818869   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:09:54.818886   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:09:54.872326   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:09:54.872337   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:09:54.872344   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:09:54.886097   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:09:54.886112   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:09:56.937087   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050949483s)
	I0114 03:09:59.437445   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:09:59.579361   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:09:59.605238   17797 logs.go:274] 0 containers: []
	W0114 03:09:59.605250   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:09:59.605337   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:09:59.628749   17797 logs.go:274] 0 containers: []
	W0114 03:09:59.628762   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:09:59.628856   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:09:59.653427   17797 logs.go:274] 0 containers: []
	W0114 03:09:59.653441   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:09:59.653529   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:09:59.677127   17797 logs.go:274] 0 containers: []
	W0114 03:09:59.677139   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:09:59.677227   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:09:59.701122   17797 logs.go:274] 0 containers: []
	W0114 03:09:59.701134   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:09:59.701218   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:09:59.724484   17797 logs.go:274] 0 containers: []
	W0114 03:09:59.724498   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:09:59.724582   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:09:59.746813   17797 logs.go:274] 0 containers: []
	W0114 03:09:59.746832   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:09:59.746920   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:09:59.770969   17797 logs.go:274] 0 containers: []
	W0114 03:09:59.770990   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:09:59.771004   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:09:59.771030   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:01.821819   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050752696s)
	I0114 03:10:01.821926   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:01.821933   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:01.860288   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:01.860302   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:01.872432   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:01.872446   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:01.926171   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:01.926188   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:01.926195   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:04.441389   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:04.579411   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:04.605134   17797 logs.go:274] 0 containers: []
	W0114 03:10:04.605147   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:04.605236   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:04.627383   17797 logs.go:274] 0 containers: []
	W0114 03:10:04.627396   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:04.627477   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:04.650233   17797 logs.go:274] 0 containers: []
	W0114 03:10:04.650253   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:04.650335   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:04.673069   17797 logs.go:274] 0 containers: []
	W0114 03:10:04.673084   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:04.673177   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:04.696522   17797 logs.go:274] 0 containers: []
	W0114 03:10:04.696535   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:04.696620   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:04.742273   17797 logs.go:274] 0 containers: []
	W0114 03:10:04.742287   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:04.742379   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:04.767048   17797 logs.go:274] 0 containers: []
	W0114 03:10:04.767063   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:04.767156   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:04.789276   17797 logs.go:274] 0 containers: []
	W0114 03:10:04.789288   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:04.789296   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:04.789303   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:04.827671   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:04.827687   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:04.839634   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:04.839669   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:04.894820   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:04.894837   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:04.894846   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:04.908501   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:04.908514   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:06.957928   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049386985s)
	I0114 03:10:09.458478   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:09.579403   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:09.603879   17797 logs.go:274] 0 containers: []
	W0114 03:10:09.603894   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:09.603978   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:09.626510   17797 logs.go:274] 0 containers: []
	W0114 03:10:09.626523   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:09.626606   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:09.649509   17797 logs.go:274] 0 containers: []
	W0114 03:10:09.649522   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:09.649606   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:09.673122   17797 logs.go:274] 0 containers: []
	W0114 03:10:09.673135   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:09.673218   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:09.696478   17797 logs.go:274] 0 containers: []
	W0114 03:10:09.696492   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:09.696571   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:09.719255   17797 logs.go:274] 0 containers: []
	W0114 03:10:09.719267   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:09.719354   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:09.743635   17797 logs.go:274] 0 containers: []
	W0114 03:10:09.743649   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:09.743741   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:09.767852   17797 logs.go:274] 0 containers: []
	W0114 03:10:09.767866   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:09.767873   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:09.767880   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:09.782063   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:09.782077   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:11.832585   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050482509s)
	I0114 03:10:11.832693   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:11.832701   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:11.870603   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:11.870616   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:11.882493   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:11.882506   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:11.937195   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:14.438097   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:14.579449   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:14.604863   17797 logs.go:274] 0 containers: []
	W0114 03:10:14.604877   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:14.604961   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:14.629427   17797 logs.go:274] 0 containers: []
	W0114 03:10:14.629443   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:14.629526   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:14.651971   17797 logs.go:274] 0 containers: []
	W0114 03:10:14.651983   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:14.652069   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:14.674605   17797 logs.go:274] 0 containers: []
	W0114 03:10:14.674617   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:14.674699   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:14.696961   17797 logs.go:274] 0 containers: []
	W0114 03:10:14.696974   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:14.697057   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:14.719126   17797 logs.go:274] 0 containers: []
	W0114 03:10:14.719138   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:14.719218   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:14.743545   17797 logs.go:274] 0 containers: []
	W0114 03:10:14.743558   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:14.743645   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:14.766503   17797 logs.go:274] 0 containers: []
	W0114 03:10:14.766516   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:14.766523   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:14.766532   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:14.803610   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:14.803627   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:14.816133   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:14.816151   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:14.870731   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:14.870744   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:14.870752   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:14.884457   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:14.884471   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:16.934271   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049772833s)
	I0114 03:10:19.434886   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:19.579841   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:19.604863   17797 logs.go:274] 0 containers: []
	W0114 03:10:19.604876   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:19.604957   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:19.627287   17797 logs.go:274] 0 containers: []
	W0114 03:10:19.627300   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:19.627382   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:19.650413   17797 logs.go:274] 0 containers: []
	W0114 03:10:19.650428   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:19.650523   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:19.674494   17797 logs.go:274] 0 containers: []
	W0114 03:10:19.674508   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:19.674593   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:19.723599   17797 logs.go:274] 0 containers: []
	W0114 03:10:19.723616   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:19.723708   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:19.747263   17797 logs.go:274] 0 containers: []
	W0114 03:10:19.747283   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:19.747384   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:19.771532   17797 logs.go:274] 0 containers: []
	W0114 03:10:19.771546   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:19.771629   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:19.793822   17797 logs.go:274] 0 containers: []
	W0114 03:10:19.793835   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:19.793843   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:19.793851   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:21.844529   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050651398s)
	I0114 03:10:21.844635   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:21.844642   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:21.882462   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:21.882478   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:21.894743   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:21.894758   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:21.949888   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:21.949902   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:21.949909   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:24.464043   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:24.579810   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:24.607838   17797 logs.go:274] 0 containers: []
	W0114 03:10:24.607852   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:24.607958   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:24.632576   17797 logs.go:274] 0 containers: []
	W0114 03:10:24.632592   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:24.632686   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:24.657697   17797 logs.go:274] 0 containers: []
	W0114 03:10:24.657712   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:24.657802   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:24.681454   17797 logs.go:274] 0 containers: []
	W0114 03:10:24.681467   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:24.681553   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:24.707255   17797 logs.go:274] 0 containers: []
	W0114 03:10:24.707270   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:24.707364   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:24.731230   17797 logs.go:274] 0 containers: []
	W0114 03:10:24.731244   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:24.731328   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:24.756507   17797 logs.go:274] 0 containers: []
	W0114 03:10:24.756522   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:24.756610   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:24.780670   17797 logs.go:274] 0 containers: []
	W0114 03:10:24.780684   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:24.780692   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:24.780699   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:24.795890   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:24.795905   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:26.876170   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.080237707s)
	I0114 03:10:26.876295   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:26.876303   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:26.915393   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:26.915412   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:26.928741   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:26.928757   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:26.985204   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:29.486771   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:29.579768   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:29.605504   17797 logs.go:274] 0 containers: []
	W0114 03:10:29.605518   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:29.605606   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:29.629306   17797 logs.go:274] 0 containers: []
	W0114 03:10:29.629320   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:29.629410   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:29.652483   17797 logs.go:274] 0 containers: []
	W0114 03:10:29.652496   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:29.652579   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:29.677071   17797 logs.go:274] 0 containers: []
	W0114 03:10:29.677085   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:29.677206   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:29.699411   17797 logs.go:274] 0 containers: []
	W0114 03:10:29.699424   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:29.699516   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:29.723278   17797 logs.go:274] 0 containers: []
	W0114 03:10:29.723293   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:29.723379   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:29.747142   17797 logs.go:274] 0 containers: []
	W0114 03:10:29.747155   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:29.747249   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:29.770780   17797 logs.go:274] 0 containers: []
	W0114 03:10:29.770793   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:29.770801   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:29.770808   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:29.808593   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:29.808607   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:29.821110   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:29.821125   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:29.875108   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:29.875119   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:29.875125   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:29.889076   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:29.889088   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:31.949209   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.06009251s)
	I0114 03:10:34.449741   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:34.581143   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:34.608331   17797 logs.go:274] 0 containers: []
	W0114 03:10:34.608345   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:34.608424   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:34.632081   17797 logs.go:274] 0 containers: []
	W0114 03:10:34.632095   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:34.632181   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:34.655511   17797 logs.go:274] 0 containers: []
	W0114 03:10:34.655524   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:34.655605   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:34.678323   17797 logs.go:274] 0 containers: []
	W0114 03:10:34.678337   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:34.678426   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:34.726959   17797 logs.go:274] 0 containers: []
	W0114 03:10:34.726974   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:34.727067   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:34.749890   17797 logs.go:274] 0 containers: []
	W0114 03:10:34.749905   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:34.749989   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:34.773019   17797 logs.go:274] 0 containers: []
	W0114 03:10:34.773034   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:34.773117   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:34.796219   17797 logs.go:274] 0 containers: []
	W0114 03:10:34.796235   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:34.796243   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:34.796251   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:34.838702   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:34.838724   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:34.852907   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:34.852922   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:34.908748   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:34.908761   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:34.908768   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:34.922877   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:34.922890   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:36.972770   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049854176s)
	I0114 03:10:39.475116   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:39.581683   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:39.607662   17797 logs.go:274] 0 containers: []
	W0114 03:10:39.607675   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:39.607761   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:39.631003   17797 logs.go:274] 0 containers: []
	W0114 03:10:39.631017   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:39.631101   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:39.654958   17797 logs.go:274] 0 containers: []
	W0114 03:10:39.654972   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:39.655056   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:39.678066   17797 logs.go:274] 0 containers: []
	W0114 03:10:39.678080   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:39.678166   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:39.701278   17797 logs.go:274] 0 containers: []
	W0114 03:10:39.701293   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:39.701376   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:39.724376   17797 logs.go:274] 0 containers: []
	W0114 03:10:39.724393   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:39.724475   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:39.747398   17797 logs.go:274] 0 containers: []
	W0114 03:10:39.747412   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:39.747494   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:39.770519   17797 logs.go:274] 0 containers: []
	W0114 03:10:39.770532   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:39.770540   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:39.770548   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:39.808850   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:39.808863   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:39.821543   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:39.821556   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:39.878901   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:39.878920   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:39.878926   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:39.893068   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:39.893084   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:41.944534   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051424166s)
	I0114 03:10:44.445670   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:44.579497   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:44.603040   17797 logs.go:274] 0 containers: []
	W0114 03:10:44.603052   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:44.603134   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:44.626786   17797 logs.go:274] 0 containers: []
	W0114 03:10:44.626798   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:44.626881   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:44.649864   17797 logs.go:274] 0 containers: []
	W0114 03:10:44.649878   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:44.649960   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:44.672138   17797 logs.go:274] 0 containers: []
	W0114 03:10:44.672150   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:44.672232   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:44.695170   17797 logs.go:274] 0 containers: []
	W0114 03:10:44.695183   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:44.695300   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:44.718068   17797 logs.go:274] 0 containers: []
	W0114 03:10:44.718082   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:44.718163   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:44.741077   17797 logs.go:274] 0 containers: []
	W0114 03:10:44.741089   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:44.741174   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:44.764720   17797 logs.go:274] 0 containers: []
	W0114 03:10:44.764733   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:44.764741   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:44.764747   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:44.819522   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:44.819538   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:44.819548   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:44.833626   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:44.833639   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:46.881095   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047429684s)
	I0114 03:10:46.881211   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:46.881220   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:46.922897   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:46.922914   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:49.436825   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:49.579688   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:49.603374   17797 logs.go:274] 0 containers: []
	W0114 03:10:49.603387   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:49.603457   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:49.633452   17797 logs.go:274] 0 containers: []
	W0114 03:10:49.633470   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:49.633585   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:49.667613   17797 logs.go:274] 0 containers: []
	W0114 03:10:49.667672   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:49.667788   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:49.695783   17797 logs.go:274] 0 containers: []
	W0114 03:10:49.695796   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:49.695881   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:49.735337   17797 logs.go:274] 0 containers: []
	W0114 03:10:49.735357   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:49.735508   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:49.766785   17797 logs.go:274] 0 containers: []
	W0114 03:10:49.766799   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:49.766897   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:49.790519   17797 logs.go:274] 0 containers: []
	W0114 03:10:49.790531   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:49.790610   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:49.812793   17797 logs.go:274] 0 containers: []
	W0114 03:10:49.812807   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:49.812815   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:49.812838   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:51.867686   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054820355s)
	I0114 03:10:51.867791   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:51.867798   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:51.904770   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:51.904788   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:51.917095   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:51.917110   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:51.973536   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:51.973549   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:51.973555   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:54.489136   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:54.579647   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:54.604608   17797 logs.go:274] 0 containers: []
	W0114 03:10:54.604620   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:54.604706   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:54.628792   17797 logs.go:274] 0 containers: []
	W0114 03:10:54.628819   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:54.628904   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:54.655598   17797 logs.go:274] 0 containers: []
	W0114 03:10:54.655614   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:54.655714   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:54.682833   17797 logs.go:274] 0 containers: []
	W0114 03:10:54.682854   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:54.682984   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:54.711553   17797 logs.go:274] 0 containers: []
	W0114 03:10:54.711566   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:54.711663   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:54.739249   17797 logs.go:274] 0 containers: []
	W0114 03:10:54.739264   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:54.739361   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:54.769612   17797 logs.go:274] 0 containers: []
	W0114 03:10:54.769625   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:54.769708   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:54.794070   17797 logs.go:274] 0 containers: []
	W0114 03:10:54.794083   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:54.794092   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:54.794099   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:54.810449   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:54.810464   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:10:56.872553   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062058422s)
	I0114 03:10:56.872730   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:10:56.872742   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:10:56.916687   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:10:56.916705   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:10:56.931739   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:56.931756   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:57.001393   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:59.502882   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:10:59.581773   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:10:59.604982   17797 logs.go:274] 0 containers: []
	W0114 03:10:59.604995   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:10:59.605084   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:10:59.627587   17797 logs.go:274] 0 containers: []
	W0114 03:10:59.627604   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:10:59.627700   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:10:59.653942   17797 logs.go:274] 0 containers: []
	W0114 03:10:59.653959   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:10:59.654067   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:10:59.687034   17797 logs.go:274] 0 containers: []
	W0114 03:10:59.687049   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:10:59.687144   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:10:59.712212   17797 logs.go:274] 0 containers: []
	W0114 03:10:59.712225   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:10:59.712309   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:10:59.739249   17797 logs.go:274] 0 containers: []
	W0114 03:10:59.739263   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:10:59.739353   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:10:59.767223   17797 logs.go:274] 0 containers: []
	W0114 03:10:59.767238   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:10:59.767330   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:10:59.793944   17797 logs.go:274] 0 containers: []
	W0114 03:10:59.793958   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:10:59.793965   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:10:59.793973   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:10:59.856594   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:10:59.856608   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:10:59.856617   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:10:59.873269   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:10:59.873284   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:01.936294   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062978555s)
	I0114 03:11:01.936457   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:01.936471   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:01.999498   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:01.999517   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:04.514581   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:04.579733   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:04.616643   17797 logs.go:274] 0 containers: []
	W0114 03:11:04.616658   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:04.616749   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:04.646672   17797 logs.go:274] 0 containers: []
	W0114 03:11:04.646685   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:04.646781   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:04.676796   17797 logs.go:274] 0 containers: []
	W0114 03:11:04.676811   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:04.676912   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:04.710019   17797 logs.go:274] 0 containers: []
	W0114 03:11:04.710032   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:04.710120   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:04.754229   17797 logs.go:274] 0 containers: []
	W0114 03:11:04.754243   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:04.754347   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:04.785788   17797 logs.go:274] 0 containers: []
	W0114 03:11:04.785804   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:04.785904   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:04.815536   17797 logs.go:274] 0 containers: []
	W0114 03:11:04.815552   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:04.815650   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:04.846584   17797 logs.go:274] 0 containers: []
	W0114 03:11:04.846601   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:04.846612   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:04.846622   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:04.861837   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:04.861857   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:04.935359   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:04.935382   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:04.935397   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:04.952478   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:04.952498   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:07.015236   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062709777s)
	I0114 03:11:07.015373   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:07.015381   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:09.556695   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:09.579970   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:09.606101   17797 logs.go:274] 0 containers: []
	W0114 03:11:09.606114   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:09.606203   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:09.630898   17797 logs.go:274] 0 containers: []
	W0114 03:11:09.630912   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:09.630995   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:09.655673   17797 logs.go:274] 0 containers: []
	W0114 03:11:09.655687   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:09.655775   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:09.679879   17797 logs.go:274] 0 containers: []
	W0114 03:11:09.679894   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:09.679979   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:09.702368   17797 logs.go:274] 0 containers: []
	W0114 03:11:09.702382   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:09.702474   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:09.725221   17797 logs.go:274] 0 containers: []
	W0114 03:11:09.725235   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:09.725317   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:09.749719   17797 logs.go:274] 0 containers: []
	W0114 03:11:09.749731   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:09.749815   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:09.773968   17797 logs.go:274] 0 containers: []
	W0114 03:11:09.773980   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:09.773988   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:09.773996   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:09.817448   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:09.817464   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:09.831086   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:09.831100   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:09.886314   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:09.886333   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:09.886339   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:09.900982   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:09.900996   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:11.952604   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051582455s)
	I0114 03:11:14.453640   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:14.579711   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:14.607134   17797 logs.go:274] 0 containers: []
	W0114 03:11:14.607149   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:14.607250   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:14.632166   17797 logs.go:274] 0 containers: []
	W0114 03:11:14.632179   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:14.632263   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:14.655801   17797 logs.go:274] 0 containers: []
	W0114 03:11:14.655815   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:14.655895   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:14.680011   17797 logs.go:274] 0 containers: []
	W0114 03:11:14.680025   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:14.680116   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:14.704131   17797 logs.go:274] 0 containers: []
	W0114 03:11:14.704144   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:14.704230   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:14.728846   17797 logs.go:274] 0 containers: []
	W0114 03:11:14.728860   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:14.728945   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:14.753677   17797 logs.go:274] 0 containers: []
	W0114 03:11:14.753691   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:14.753778   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:14.779047   17797 logs.go:274] 0 containers: []
	W0114 03:11:14.779060   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:14.779068   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:14.779077   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:14.790983   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:14.790996   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:14.846184   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:14.846197   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:14.846203   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:14.860232   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:14.860245   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:16.913161   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052889989s)
	I0114 03:11:16.913296   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:16.913304   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:19.451812   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:19.581907   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:19.606845   17797 logs.go:274] 0 containers: []
	W0114 03:11:19.606860   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:19.606946   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:19.630354   17797 logs.go:274] 0 containers: []
	W0114 03:11:19.630367   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:19.630452   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:19.653641   17797 logs.go:274] 0 containers: []
	W0114 03:11:19.653655   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:19.653740   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:19.678277   17797 logs.go:274] 0 containers: []
	W0114 03:11:19.678292   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:19.678400   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:19.727258   17797 logs.go:274] 0 containers: []
	W0114 03:11:19.727272   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:19.727353   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:19.750740   17797 logs.go:274] 0 containers: []
	W0114 03:11:19.750754   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:19.750844   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:19.775111   17797 logs.go:274] 0 containers: []
	W0114 03:11:19.775124   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:19.775196   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:19.799835   17797 logs.go:274] 0 containers: []
	W0114 03:11:19.799849   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:19.799858   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:19.799865   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:19.854390   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:19.854402   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:19.854409   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:19.868057   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:19.868070   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:21.913637   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045536812s)
	I0114 03:11:21.913765   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:21.913777   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:21.958275   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:21.958291   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:24.472470   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:24.580948   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:24.603828   17797 logs.go:274] 0 containers: []
	W0114 03:11:24.603840   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:24.603927   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:24.628801   17797 logs.go:274] 0 containers: []
	W0114 03:11:24.628833   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:24.628949   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:24.655176   17797 logs.go:274] 0 containers: []
	W0114 03:11:24.655189   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:24.655275   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:24.681056   17797 logs.go:274] 0 containers: []
	W0114 03:11:24.681070   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:24.681192   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:24.709062   17797 logs.go:274] 0 containers: []
	W0114 03:11:24.709076   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:24.709152   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:24.736203   17797 logs.go:274] 0 containers: []
	W0114 03:11:24.736220   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:24.736317   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:24.761518   17797 logs.go:274] 0 containers: []
	W0114 03:11:24.761534   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:24.761624   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:24.787391   17797 logs.go:274] 0 containers: []
	W0114 03:11:24.787404   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:24.787413   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:24.787422   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:24.829452   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:24.829472   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:24.843992   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:24.844010   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:24.904628   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:24.904639   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:24.904645   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:24.920988   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:24.921006   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:26.976978   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055923303s)
	I0114 03:11:29.477363   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:29.579967   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:29.604686   17797 logs.go:274] 0 containers: []
	W0114 03:11:29.604699   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:29.604776   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:29.630131   17797 logs.go:274] 0 containers: []
	W0114 03:11:29.630144   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:29.630226   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:29.660843   17797 logs.go:274] 0 containers: []
	W0114 03:11:29.660857   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:29.660942   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:29.685261   17797 logs.go:274] 0 containers: []
	W0114 03:11:29.685290   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:29.685400   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:29.711166   17797 logs.go:274] 0 containers: []
	W0114 03:11:29.711179   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:29.711265   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:29.738686   17797 logs.go:274] 0 containers: []
	W0114 03:11:29.738698   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:29.738775   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:29.765195   17797 logs.go:274] 0 containers: []
	W0114 03:11:29.765208   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:29.765299   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:29.790167   17797 logs.go:274] 0 containers: []
	W0114 03:11:29.790183   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:29.790191   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:29.790198   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:29.831680   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:29.831700   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:29.845116   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:29.845131   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:29.905367   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:29.905379   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:29.905387   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:29.920736   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:29.920750   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:31.979716   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058939606s)
	I0114 03:11:34.480082   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:34.580112   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:34.612132   17797 logs.go:274] 0 containers: []
	W0114 03:11:34.612148   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:34.612240   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:34.639373   17797 logs.go:274] 0 containers: []
	W0114 03:11:34.639390   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:34.639502   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:34.675872   17797 logs.go:274] 0 containers: []
	W0114 03:11:34.675887   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:34.675975   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:34.708084   17797 logs.go:274] 0 containers: []
	W0114 03:11:34.708105   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:34.708279   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:34.741143   17797 logs.go:274] 0 containers: []
	W0114 03:11:34.741158   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:34.741251   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:34.780307   17797 logs.go:274] 0 containers: []
	W0114 03:11:34.780321   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:34.780420   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:34.809603   17797 logs.go:274] 0 containers: []
	W0114 03:11:34.809617   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:34.809707   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:34.837847   17797 logs.go:274] 0 containers: []
	W0114 03:11:34.837866   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:34.837877   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:34.837904   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:34.892112   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:34.892140   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:34.908145   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:34.908173   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:34.980021   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:34.980038   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:34.980047   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:34.998052   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:34.998070   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:37.063419   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065318673s)
	I0114 03:11:39.563825   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:39.579941   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:39.606055   17797 logs.go:274] 0 containers: []
	W0114 03:11:39.606068   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:39.606154   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:39.631340   17797 logs.go:274] 0 containers: []
	W0114 03:11:39.631354   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:39.631438   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:39.656376   17797 logs.go:274] 0 containers: []
	W0114 03:11:39.656389   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:39.656477   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:39.682494   17797 logs.go:274] 0 containers: []
	W0114 03:11:39.682507   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:39.682577   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:39.707920   17797 logs.go:274] 0 containers: []
	W0114 03:11:39.707937   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:39.708021   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:39.732276   17797 logs.go:274] 0 containers: []
	W0114 03:11:39.732294   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:39.732383   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:39.757163   17797 logs.go:274] 0 containers: []
	W0114 03:11:39.757177   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:39.757270   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:39.784266   17797 logs.go:274] 0 containers: []
	W0114 03:11:39.784279   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:39.784288   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:39.784296   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:39.830083   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:39.830107   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:39.845056   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:39.845070   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:39.903312   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:39.903325   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:39.903332   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:39.919229   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:39.919245   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:41.973233   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053960941s)
	I0114 03:11:44.473747   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:44.580240   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:44.604544   17797 logs.go:274] 0 containers: []
	W0114 03:11:44.604558   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:44.604638   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:44.627344   17797 logs.go:274] 0 containers: []
	W0114 03:11:44.627359   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:44.627455   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:44.651448   17797 logs.go:274] 0 containers: []
	W0114 03:11:44.651461   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:44.651544   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:44.675220   17797 logs.go:274] 0 containers: []
	W0114 03:11:44.675234   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:44.675317   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:44.700583   17797 logs.go:274] 0 containers: []
	W0114 03:11:44.700612   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:44.700733   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:44.728107   17797 logs.go:274] 0 containers: []
	W0114 03:11:44.728124   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:44.728225   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:44.751166   17797 logs.go:274] 0 containers: []
	W0114 03:11:44.751180   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:44.751287   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:44.773181   17797 logs.go:274] 0 containers: []
	W0114 03:11:44.773196   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:44.773207   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:44.773217   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:44.816687   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:44.816703   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:44.829849   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:44.829862   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:44.884834   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:44.884845   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:44.884852   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:44.898603   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:44.898614   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:46.949360   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050719447s)
	I0114 03:11:49.449724   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:49.580371   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:49.606002   17797 logs.go:274] 0 containers: []
	W0114 03:11:49.606015   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:49.606094   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:49.629749   17797 logs.go:274] 0 containers: []
	W0114 03:11:49.629761   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:49.629841   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:49.653579   17797 logs.go:274] 0 containers: []
	W0114 03:11:49.653591   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:49.653673   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:49.678884   17797 logs.go:274] 0 containers: []
	W0114 03:11:49.678896   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:49.678978   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:49.722435   17797 logs.go:274] 0 containers: []
	W0114 03:11:49.722452   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:49.722564   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:49.747605   17797 logs.go:274] 0 containers: []
	W0114 03:11:49.747615   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:49.747698   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:49.772404   17797 logs.go:274] 0 containers: []
	W0114 03:11:49.772415   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:49.772503   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:49.796439   17797 logs.go:274] 0 containers: []
	W0114 03:11:49.796452   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:49.796459   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:49.796469   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:49.853474   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:49.853486   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:49.853492   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:49.867248   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:49.867261   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:51.913183   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045893593s)
	I0114 03:11:51.913303   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:51.913312   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:51.957850   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:51.957872   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:54.471540   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:54.580107   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:54.604130   17797 logs.go:274] 0 containers: []
	W0114 03:11:54.604144   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:54.604228   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:54.631637   17797 logs.go:274] 0 containers: []
	W0114 03:11:54.631653   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:54.631761   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:54.656359   17797 logs.go:274] 0 containers: []
	W0114 03:11:54.656373   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:54.656455   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:54.679795   17797 logs.go:274] 0 containers: []
	W0114 03:11:54.679808   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:54.679894   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:54.705108   17797 logs.go:274] 0 containers: []
	W0114 03:11:54.705122   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:54.705204   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:54.728197   17797 logs.go:274] 0 containers: []
	W0114 03:11:54.728210   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:54.728297   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:54.754034   17797 logs.go:274] 0 containers: []
	W0114 03:11:54.754047   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:54.754150   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:54.778296   17797 logs.go:274] 0 containers: []
	W0114 03:11:54.778309   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:54.778316   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:54.778325   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:54.816642   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:54.816657   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:54.829476   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:54.829497   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:54.886324   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:54.886336   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:54.886342   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:54.901225   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:54.901240   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:11:56.951185   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049918255s)
	I0114 03:11:59.451483   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:11:59.582230   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:11:59.607032   17797 logs.go:274] 0 containers: []
	W0114 03:11:59.607045   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:11:59.607127   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:11:59.630406   17797 logs.go:274] 0 containers: []
	W0114 03:11:59.630419   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:11:59.630500   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:11:59.653326   17797 logs.go:274] 0 containers: []
	W0114 03:11:59.653338   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:11:59.653420   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:11:59.675654   17797 logs.go:274] 0 containers: []
	W0114 03:11:59.675667   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:11:59.675759   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:11:59.699299   17797 logs.go:274] 0 containers: []
	W0114 03:11:59.699312   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:11:59.699396   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:11:59.723344   17797 logs.go:274] 0 containers: []
	W0114 03:11:59.723358   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:11:59.723434   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:11:59.746342   17797 logs.go:274] 0 containers: []
	W0114 03:11:59.746355   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:11:59.746441   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:11:59.768348   17797 logs.go:274] 0 containers: []
	W0114 03:11:59.768367   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:11:59.768374   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:11:59.768383   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:11:59.807580   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:11:59.807596   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:11:59.819804   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:11:59.819818   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:11:59.874104   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:11:59.874117   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:11:59.874123   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:11:59.888164   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:11:59.888177   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:12:01.938613   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050409672s)
	I0114 03:12:04.440231   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:12:04.580681   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:12:04.605619   17797 logs.go:274] 0 containers: []
	W0114 03:12:04.605634   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:12:04.605718   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:12:04.628906   17797 logs.go:274] 0 containers: []
	W0114 03:12:04.628921   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:12:04.629005   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:12:04.651741   17797 logs.go:274] 0 containers: []
	W0114 03:12:04.651759   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:12:04.651858   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:12:04.675559   17797 logs.go:274] 0 containers: []
	W0114 03:12:04.675572   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:12:04.675654   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:12:04.721115   17797 logs.go:274] 0 containers: []
	W0114 03:12:04.721137   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:12:04.721278   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:12:04.745655   17797 logs.go:274] 0 containers: []
	W0114 03:12:04.745669   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:12:04.745751   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:12:04.769106   17797 logs.go:274] 0 containers: []
	W0114 03:12:04.769120   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:12:04.769206   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:12:04.793322   17797 logs.go:274] 0 containers: []
	W0114 03:12:04.793336   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:12:04.793352   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:12:04.793360   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:12:04.805540   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:12:04.805551   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:12:04.860774   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:12:04.860791   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:12:04.860798   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:12:04.874696   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:12:04.874709   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:12:06.923338   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048599954s)
	I0114 03:12:06.923481   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:12:06.923492   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:12:09.466101   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:12:09.580260   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:12:09.616599   17797 logs.go:274] 0 containers: []
	W0114 03:12:09.616617   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:12:09.616706   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:12:09.653067   17797 logs.go:274] 0 containers: []
	W0114 03:12:09.653084   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:12:09.653204   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:12:09.698498   17797 logs.go:274] 0 containers: []
	W0114 03:12:09.698512   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:12:09.698594   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:12:09.721893   17797 logs.go:274] 0 containers: []
	W0114 03:12:09.721905   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:12:09.721988   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:12:09.753971   17797 logs.go:274] 0 containers: []
	W0114 03:12:09.753986   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:12:09.754106   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:12:09.791410   17797 logs.go:274] 0 containers: []
	W0114 03:12:09.791425   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:12:09.791513   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:12:09.824140   17797 logs.go:274] 0 containers: []
	W0114 03:12:09.824157   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:12:09.824277   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:12:09.869061   17797 logs.go:274] 0 containers: []
	W0114 03:12:09.869076   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:12:09.869083   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:12:09.869091   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:12:09.941017   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:12:09.941035   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:12:09.941044   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:12:09.960591   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:12:09.960609   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:12:12.014299   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053660782s)
	I0114 03:12:12.014406   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:12:12.014413   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:12:12.052698   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:12:12.052712   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:12:14.565666   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:12:14.580197   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:12:14.603719   17797 logs.go:274] 0 containers: []
	W0114 03:12:14.603732   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:12:14.603816   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:12:14.627754   17797 logs.go:274] 0 containers: []
	W0114 03:12:14.627766   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:12:14.627867   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:12:14.651763   17797 logs.go:274] 0 containers: []
	W0114 03:12:14.651776   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:12:14.651868   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:12:14.675649   17797 logs.go:274] 0 containers: []
	W0114 03:12:14.675661   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:12:14.675746   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:12:14.698696   17797 logs.go:274] 0 containers: []
	W0114 03:12:14.698710   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:12:14.698793   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:12:14.721538   17797 logs.go:274] 0 containers: []
	W0114 03:12:14.721551   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:12:14.721639   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:12:14.744959   17797 logs.go:274] 0 containers: []
	W0114 03:12:14.744972   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:12:14.745057   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:12:14.769176   17797 logs.go:274] 0 containers: []
	W0114 03:12:14.769189   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:12:14.769197   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:12:14.769204   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:12:14.807397   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:12:14.807411   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:12:14.819889   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:12:14.819904   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:12:14.875220   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:12:14.875238   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:12:14.875251   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:12:14.890594   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:12:14.890608   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:12:16.940238   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049603777s)
	I0114 03:12:19.442625   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:12:19.581139   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:12:19.607668   17797 logs.go:274] 0 containers: []
	W0114 03:12:19.607681   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:12:19.607757   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:12:19.631392   17797 logs.go:274] 0 containers: []
	W0114 03:12:19.631404   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:12:19.631490   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:12:19.655389   17797 logs.go:274] 0 containers: []
	W0114 03:12:19.655402   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:12:19.655485   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:12:19.679360   17797 logs.go:274] 0 containers: []
	W0114 03:12:19.679374   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:12:19.679458   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:12:19.727768   17797 logs.go:274] 0 containers: []
	W0114 03:12:19.727782   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:12:19.727865   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:12:19.753755   17797 logs.go:274] 0 containers: []
	W0114 03:12:19.753768   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:12:19.753851   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:12:19.776651   17797 logs.go:274] 0 containers: []
	W0114 03:12:19.776665   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:12:19.776752   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:12:19.799802   17797 logs.go:274] 0 containers: []
	W0114 03:12:19.799816   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:12:19.799823   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:12:19.799831   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:12:19.854509   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:12:19.854522   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:12:19.854529   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:12:19.868377   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:12:19.868390   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:12:21.919059   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050642958s)
	I0114 03:12:21.919178   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:12:21.919185   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:12:21.957040   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:12:21.957058   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:12:24.470171   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:12:24.582343   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:12:24.609930   17797 logs.go:274] 0 containers: []
	W0114 03:12:24.609944   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:12:24.610029   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:12:24.633457   17797 logs.go:274] 0 containers: []
	W0114 03:12:24.633470   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:12:24.633555   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:12:24.658353   17797 logs.go:274] 0 containers: []
	W0114 03:12:24.658367   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:12:24.658450   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:12:24.681864   17797 logs.go:274] 0 containers: []
	W0114 03:12:24.681876   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:12:24.681968   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:12:24.705800   17797 logs.go:274] 0 containers: []
	W0114 03:12:24.705814   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:12:24.705896   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:12:24.729956   17797 logs.go:274] 0 containers: []
	W0114 03:12:24.729968   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:12:24.730049   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:12:24.753118   17797 logs.go:274] 0 containers: []
	W0114 03:12:24.753131   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:12:24.753216   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:12:24.776007   17797 logs.go:274] 0 containers: []
	W0114 03:12:24.776019   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:12:24.776027   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:12:24.776033   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:12:24.814310   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:12:24.814324   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:12:24.826600   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:12:24.826615   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:12:24.883597   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:12:24.883608   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:12:24.883616   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:12:24.898144   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:12:24.898159   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:12:26.950013   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051824218s)
	I0114 03:12:29.450616   17797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:12:29.580803   17797 kubeadm.go:631] restartCluster took 4m4.183105862s
	W0114 03:12:29.580944   17797 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0114 03:12:29.580971   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0114 03:12:29.997615   17797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:12:30.007458   17797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 03:12:30.015192   17797 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 03:12:30.015252   17797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:12:30.023038   17797 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 03:12:30.023070   17797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 03:12:30.071379   17797 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0114 03:12:30.071419   17797 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 03:12:30.367545   17797 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 03:12:30.367645   17797 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 03:12:30.367733   17797 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 03:12:30.593110   17797 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 03:12:30.593920   17797 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 03:12:30.600436   17797 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0114 03:12:30.662810   17797 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 03:12:30.684218   17797 out.go:204]   - Generating certificates and keys ...
	I0114 03:12:30.684303   17797 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 03:12:30.684376   17797 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 03:12:30.684455   17797 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 03:12:30.684546   17797 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0114 03:12:30.684611   17797 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0114 03:12:30.684652   17797 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0114 03:12:30.684707   17797 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0114 03:12:30.684770   17797 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0114 03:12:30.684870   17797 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 03:12:30.684964   17797 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 03:12:30.684999   17797 kubeadm.go:317] [certs] Using the existing "sa" key
	I0114 03:12:30.685065   17797 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 03:12:30.843377   17797 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 03:12:30.928140   17797 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 03:12:31.214160   17797 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 03:12:31.335523   17797 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 03:12:31.336091   17797 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 03:12:31.357661   17797 out.go:204]   - Booting up control plane ...
	I0114 03:12:31.357775   17797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 03:12:31.357888   17797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 03:12:31.357973   17797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 03:12:31.358071   17797 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 03:12:31.358261   17797 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 03:13:11.345976   17797 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 03:13:11.346450   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:13:11.346676   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:13:16.346910   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:13:16.347068   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:13:26.347820   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:13:26.347990   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:13:46.350683   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:13:46.350910   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:14:26.352893   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:14:26.353089   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:14:26.353097   17797 kubeadm.go:317] 
	I0114 03:14:26.353150   17797 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 03:14:26.353211   17797 kubeadm.go:317] 	timed out waiting for the condition
	I0114 03:14:26.353227   17797 kubeadm.go:317] 
	I0114 03:14:26.353264   17797 kubeadm.go:317] This error is likely caused by:
	I0114 03:14:26.353316   17797 kubeadm.go:317] 	- The kubelet is not running
	I0114 03:14:26.353467   17797 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 03:14:26.353487   17797 kubeadm.go:317] 
	I0114 03:14:26.353617   17797 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 03:14:26.353654   17797 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 03:14:26.353690   17797 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 03:14:26.353703   17797 kubeadm.go:317] 
	I0114 03:14:26.353781   17797 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 03:14:26.353853   17797 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0114 03:14:26.353931   17797 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0114 03:14:26.353973   17797 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0114 03:14:26.354029   17797 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 03:14:26.354052   17797 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0114 03:14:26.356073   17797 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 03:14:26.356189   17797 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0114 03:14:26.356310   17797 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 03:14:26.356373   17797 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 03:14:26.356424   17797 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0114 03:14:26.356561   17797 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0114 03:14:26.356585   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0114 03:14:26.772146   17797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:14:26.781892   17797 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 03:14:26.781951   17797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:14:26.789387   17797 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 03:14:26.789410   17797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 03:14:26.837273   17797 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0114 03:14:26.837326   17797 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 03:14:27.133445   17797 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 03:14:27.133535   17797 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 03:14:27.133612   17797 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 03:14:27.360881   17797 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 03:14:27.361665   17797 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 03:14:27.368176   17797 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0114 03:14:27.424739   17797 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 03:14:27.446596   17797 out.go:204]   - Generating certificates and keys ...
	I0114 03:14:27.446672   17797 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 03:14:27.446774   17797 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 03:14:27.446899   17797 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 03:14:27.446963   17797 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0114 03:14:27.447106   17797 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0114 03:14:27.447152   17797 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0114 03:14:27.447243   17797 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0114 03:14:27.447300   17797 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0114 03:14:27.447366   17797 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 03:14:27.447447   17797 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 03:14:27.447557   17797 kubeadm.go:317] [certs] Using the existing "sa" key
	I0114 03:14:27.447606   17797 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 03:14:27.782675   17797 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 03:14:27.938458   17797 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 03:14:28.018189   17797 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 03:14:28.181641   17797 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 03:14:28.182361   17797 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 03:14:28.204801   17797 out.go:204]   - Booting up control plane ...
	I0114 03:14:28.205040   17797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 03:14:28.205184   17797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 03:14:28.205334   17797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 03:14:28.205466   17797 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 03:14:28.205731   17797 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 03:15:08.191630   17797 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 03:15:08.192165   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:15:08.192413   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:15:13.194308   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:15:13.194520   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:15:23.195684   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:15:23.195890   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:15:43.196954   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:15:43.197157   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:16:23.197857   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:16:23.198022   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:16:23.198036   17797 kubeadm.go:317] 
	I0114 03:16:23.198073   17797 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 03:16:23.198121   17797 kubeadm.go:317] 	timed out waiting for the condition
	I0114 03:16:23.198127   17797 kubeadm.go:317] 
	I0114 03:16:23.198166   17797 kubeadm.go:317] This error is likely caused by:
	I0114 03:16:23.198201   17797 kubeadm.go:317] 	- The kubelet is not running
	I0114 03:16:23.198294   17797 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 03:16:23.198303   17797 kubeadm.go:317] 
	I0114 03:16:23.198379   17797 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 03:16:23.198402   17797 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 03:16:23.198436   17797 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 03:16:23.198444   17797 kubeadm.go:317] 
	I0114 03:16:23.198522   17797 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 03:16:23.198596   17797 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0114 03:16:23.198665   17797 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0114 03:16:23.198704   17797 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0114 03:16:23.198763   17797 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 03:16:23.198792   17797 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0114 03:16:23.201563   17797 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 03:16:23.201674   17797 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0114 03:16:23.201763   17797 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 03:16:23.201838   17797 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 03:16:23.201898   17797 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 03:16:23.201933   17797 kubeadm.go:398] StartCluster complete in 7m57.833711298s
	I0114 03:16:23.202039   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:16:23.225134   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.225148   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:16:23.225229   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:16:23.247933   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.247947   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:16:23.248031   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:16:23.274185   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.274203   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:16:23.274302   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:16:23.299596   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.299609   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:16:23.299692   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:16:23.322788   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.322804   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:16:23.322895   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:16:23.346309   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.346322   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:16:23.346409   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:16:23.370261   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.370274   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:16:23.370358   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:16:23.395385   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.395400   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:16:23.395408   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:16:23.395421   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:16:23.434433   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:16:23.434450   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:16:23.447209   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:16:23.447224   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:16:23.504472   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:16:23.504484   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:16:23.504490   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:16:23.518654   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:16:23.518668   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:16:25.568851   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050156639s)
	W0114 03:16:25.568966   17797 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0114 03:16:25.568982   17797 out.go:239] * 
	* 
	W0114 03:16:25.569145   17797 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 03:16:25.569173   17797 out.go:239] * 
	* 
	W0114 03:16:25.569804   17797 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 03:16:25.612853   17797 out.go:177] 
	W0114 03:16:25.655104   17797 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 03:16:25.655198   17797 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0114 03:16:25.655267   17797 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0114 03:16:25.717989   17797 out.go:177] 

                                                
                                                
** /stderr **
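The failure above is the kubelet never answering its health check, and both the kubeadm output and minikube's closing suggestion name the same triage steps. A minimal shell sketch of that sequence, built only from the commands quoted in the output (the 'minikube ssh' entry point and the single-flag retry are assumptions based on the suggestion at the end of the log, not something this run executed):

	# open a shell on the node for this profile (assumption: ssh subcommand used for triage)
	out/minikube-darwin-amd64 ssh -p old-k8s-version-030235
	# inside the node: check the kubelet service and its logs
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers and read a failing one's logs
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID
	# back on the host: retry the start with the cgroup driver minikube suggests
	out/minikube-darwin-amd64 start -p old-k8s-version-030235 --extra-config=kubelet.cgroup-driver=systemd
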
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-030235 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-030235
helpers_test.go:235: (dbg) docker inspect old-k8s-version-030235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942",
	        "Created": "2023-01-14T11:02:44.471910321Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T11:08:21.837144259Z",
	            "FinishedAt": "2023-01-14T11:08:18.899183667Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hostname",
	        "HostsPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hosts",
	        "LogPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942-json.log",
	        "Name": "/old-k8s-version-030235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-030235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-030235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-030235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-030235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-030235",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96e0c70534d1f6a1812c15eb3499843abdb380deba02ee4637e8918b0f3daae3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54076"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54078"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/96e0c70534d1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-030235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d977528adcbe",
	                        "old-k8s-version-030235"
	                    ],
	                    "NetworkID": "ab958c8662819925836c350f1443c8060424291379d9dc2b6c89656fa5f7da2a",
	                    "EndpointID": "99ff9fa9f16b08cacf98f575b4464b9b756d4f1cf10c888cca45473adbdc8e4e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
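Most of the docker inspect dump above is static container configuration; the fields the post-mortem acts on live under .State. The same information can be pulled with a --format template, as the harness itself does later for .State.Status (a sketch; the values in the comment are copied from the dump above and would differ on another run):

	docker inspect old-k8s-version-030235 --format '{{.State.Status}} started={{.State.StartedAt}} restarts={{.RestartCount}}'
	# running started=2023-01-14T11:08:21.837144259Z restarts=0
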
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 2 (396.479689ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-030235 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-030235 logs -n 25: (3.421600173s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p enable-default-cni-024325                      | enable-default-cni-024325 | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
	| start   | -p kubenet-024325                                 | kubenet-024325            | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:03 PST |
	|         | --memory=2048                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                 |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                           |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	| delete  | -p calico-024326                                  | calico-024326             | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
	| start   | -p old-k8s-version-030235                         | old-k8s-version-030235    | jenkins | v1.28.0 | 14 Jan 23 03:02 PST |                     |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                           |         |         |                     |                     |
	|         | --kvm-network=default                             |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                           |         |         |                     |                     |
	|         | --keep-context=false                              |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                           |         |         |                     |                     |
	| ssh     | -p kubenet-024325 pgrep -a                        | kubenet-024325            | jenkins | v1.28.0 | 14 Jan 23 03:03 PST | 14 Jan 23 03:03 PST |
	|         | kubelet                                           |                           |         |         |                     |                     |
	| delete  | -p kubenet-024325                                 | kubenet-024325            | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:04 PST |
	| start   | -p no-preload-030433                              | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:05 PST |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                 |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-030433        | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:05 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                           |         |         |                     |                     |
	| stop    | -p no-preload-030433                              | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:05 PST |
	|         | --alsologtostderr -v=3                            |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-030433             | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:05 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-030433                              | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:10 PST |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                 |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-030235   | old-k8s-version-030235    | jenkins | v1.28.0 | 14 Jan 23 03:06 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-030235                         | old-k8s-version-030235    | jenkins | v1.28.0 | 14 Jan 23 03:08 PST | 14 Jan 23 03:08 PST |
	|         | --alsologtostderr -v=3                            |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-030235        | old-k8s-version-030235    | jenkins | v1.28.0 | 14 Jan 23 03:08 PST | 14 Jan 23 03:08 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-030235                         | old-k8s-version-030235    | jenkins | v1.28.0 | 14 Jan 23 03:08 PST |                     |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                           |         |         |                     |                     |
	|         | --kvm-network=default                             |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                           |         |         |                     |                     |
	|         | --keep-context=false                              |                           |         |         |                     |                     |
	|         | --driver=docker                                   |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                           |         |         |                     |                     |
	| ssh     | -p no-preload-030433 sudo                         | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:11 PST |
	|         | crictl images -o json                             |                           |         |         |                     |                     |
	| pause   | -p no-preload-030433                              | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:11 PST |
	|         | --alsologtostderr -v=1                            |                           |         |         |                     |                     |
	| unpause | -p no-preload-030433                              | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:11 PST |
	|         | --alsologtostderr -v=1                            |                           |         |         |                     |                     |
	| delete  | -p no-preload-030433                              | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:11 PST |
	| delete  | -p no-preload-030433                              | no-preload-030433         | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:11 PST |
	| start   | -p embed-certs-031128                             | embed-certs-031128        | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:12 PST |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-031128       | embed-certs-031128        | jenkins | v1.28.0 | 14 Jan 23 03:13 PST | 14 Jan 23 03:13 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                           |         |         |                     |                     |
	| stop    | -p embed-certs-031128                             | embed-certs-031128        | jenkins | v1.28.0 | 14 Jan 23 03:13 PST | 14 Jan 23 03:13 PST |
	|         | --alsologtostderr -v=3                            |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-031128            | embed-certs-031128        | jenkins | v1.28.0 | 14 Jan 23 03:13 PST | 14 Jan 23 03:13 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-031128                             | embed-certs-031128        | jenkins | v1.28.0 | 14 Jan 23 03:13 PST |                     |
	|         | --memory=2200                                     |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                           |         |         |                     |                     |
	|---------|---------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 03:13:14
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 03:13:14.107749   18551 out.go:296] Setting OutFile to fd 1 ...
	I0114 03:13:14.107913   18551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:13:14.107919   18551 out.go:309] Setting ErrFile to fd 2...
	I0114 03:13:14.107923   18551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:13:14.108035   18551 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 03:13:14.108519   18551 out.go:303] Setting JSON to false
	I0114 03:13:14.127082   18551 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4368,"bootTime":1673690426,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 03:13:14.127232   18551 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 03:13:14.148729   18551 out.go:177] * [embed-certs-031128] minikube v1.28.0 on Darwin 13.0.1
	I0114 03:13:14.192193   18551 notify.go:220] Checking for updates...
	I0114 03:13:14.192218   18551 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 03:13:14.213920   18551 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 03:13:14.236001   18551 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 03:13:14.258015   18551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 03:13:14.279977   18551 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 03:13:14.302384   18551 config.go:180] Loaded profile config "embed-certs-031128": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:13:14.303132   18551 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 03:13:14.364020   18551 docker.go:138] docker version: linux-20.10.21
	I0114 03:13:14.364178   18551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:13:14.504888   18551 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:13:14.41411411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:13:14.526825   18551 out.go:177] * Using the docker driver based on existing profile
	I0114 03:13:14.548478   18551 start.go:294] selected driver: docker
	I0114 03:13:14.548502   18551 start.go:838] validating driver "docker" against &{Name:embed-certs-031128 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-031128 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mo
untString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:13:14.548622   18551 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 03:13:14.552435   18551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:13:14.693827   18551 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:13:14.602518279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:13:14.693996   18551 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 03:13:14.694016   18551 cni.go:95] Creating CNI manager for ""
	I0114 03:13:14.694026   18551 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:13:14.694037   18551 start_flags.go:319] config:
	{Name:embed-certs-031128 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-031128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:13:14.736394   18551 out.go:177] * Starting control plane node embed-certs-031128 in cluster embed-certs-031128
	I0114 03:13:14.757422   18551 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 03:13:14.778599   18551 out.go:177] * Pulling base image ...
	I0114 03:13:14.821507   18551 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:13:14.821570   18551 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 03:13:14.821592   18551 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 03:13:14.821604   18551 cache.go:57] Caching tarball of preloaded images
	I0114 03:13:14.821739   18551 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 03:13:14.821754   18551 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 03:13:14.822127   18551 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128/config.json ...
	I0114 03:13:14.877943   18551 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 03:13:14.877961   18551 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 03:13:14.877982   18551 cache.go:193] Successfully downloaded all kic artifacts
	I0114 03:13:14.878026   18551 start.go:364] acquiring machines lock for embed-certs-031128: {Name:mk21a7365c177c2c66d317b05a2e4be5ef834db4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:13:14.878111   18551 start.go:368] acquired machines lock for "embed-certs-031128" in 65.371µs
	I0114 03:13:14.878137   18551 start.go:96] Skipping create...Using existing machine configuration
	I0114 03:13:14.878148   18551 fix.go:55] fixHost starting: 
	I0114 03:13:14.878422   18551 cli_runner.go:164] Run: docker container inspect embed-certs-031128 --format={{.State.Status}}
	I0114 03:13:14.935518   18551 fix.go:103] recreateIfNeeded on embed-certs-031128: state=Stopped err=<nil>
	W0114 03:13:14.935557   18551 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 03:13:14.979157   18551 out.go:177] * Restarting existing docker container for "embed-certs-031128" ...
	I0114 03:13:11.345976   17797 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 03:13:11.346450   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:13:11.346676   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:13:15.000099   18551 cli_runner.go:164] Run: docker start embed-certs-031128
	I0114 03:13:15.341184   18551 cli_runner.go:164] Run: docker container inspect embed-certs-031128 --format={{.State.Status}}
	I0114 03:13:15.404309   18551 kic.go:426] container "embed-certs-031128" state is running.
	I0114 03:13:15.404910   18551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-031128
	I0114 03:13:15.469345   18551 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128/config.json ...
	I0114 03:13:15.469852   18551 machine.go:88] provisioning docker machine ...
	I0114 03:13:15.469880   18551 ubuntu.go:169] provisioning hostname "embed-certs-031128"
	I0114 03:13:15.469987   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:15.541443   18551 main.go:134] libmachine: Using SSH client type: native
	I0114 03:13:15.541671   18551 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54237 <nil> <nil>}
	I0114 03:13:15.541683   18551 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-031128 && echo "embed-certs-031128" | sudo tee /etc/hostname
	I0114 03:13:15.667192   18551 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-031128
	
	I0114 03:13:15.667298   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:15.730958   18551 main.go:134] libmachine: Using SSH client type: native
	I0114 03:13:15.731133   18551 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54237 <nil> <nil>}
	I0114 03:13:15.731150   18551 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-031128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-031128/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-031128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 03:13:15.846973   18551 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:13:15.847000   18551 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 03:13:15.847018   18551 ubuntu.go:177] setting up certificates
	I0114 03:13:15.847025   18551 provision.go:83] configureAuth start
	I0114 03:13:15.847117   18551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-031128
	I0114 03:13:15.907337   18551 provision.go:138] copyHostCerts
	I0114 03:13:15.907454   18551 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 03:13:15.907470   18551 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 03:13:15.907578   18551 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 03:13:15.907798   18551 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 03:13:15.907806   18551 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 03:13:15.907877   18551 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 03:13:15.908053   18551 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 03:13:15.908063   18551 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 03:13:15.908145   18551 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 03:13:15.908276   18551 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.embed-certs-031128 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-031128]
	I0114 03:13:15.967831   18551 provision.go:172] copyRemoteCerts
	I0114 03:13:15.967912   18551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 03:13:15.967983   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:16.030154   18551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54237 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/embed-certs-031128/id_rsa Username:docker}
	I0114 03:13:16.117778   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0114 03:13:16.137665   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 03:13:16.158401   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 03:13:16.180996   18551 provision.go:86] duration metric: configureAuth took 333.955323ms
	I0114 03:13:16.181013   18551 ubuntu.go:193] setting minikube options for container-runtime
	I0114 03:13:16.181244   18551 config.go:180] Loaded profile config "embed-certs-031128": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:13:16.181329   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:16.246051   18551 main.go:134] libmachine: Using SSH client type: native
	I0114 03:13:16.246212   18551 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54237 <nil> <nil>}
	I0114 03:13:16.246224   18551 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 03:13:16.364825   18551 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 03:13:16.364841   18551 ubuntu.go:71] root file system type: overlay
	I0114 03:13:16.365051   18551 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 03:13:16.365157   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:16.429786   18551 main.go:134] libmachine: Using SSH client type: native
	I0114 03:13:16.429944   18551 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54237 <nil> <nil>}
	I0114 03:13:16.429995   18551 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 03:13:16.556882   18551 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 03:13:16.556994   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:16.614292   18551 main.go:134] libmachine: Using SSH client type: native
	I0114 03:13:16.614434   18551 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 54237 <nil> <nil>}
	I0114 03:13:16.614447   18551 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 03:13:16.738488   18551 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:13:16.738507   18551 machine.go:91] provisioned docker machine in 1.268637587s
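	The restart at 03:13:16 swaps in the regenerated unit with an install-if-changed pattern (diff first, then move/reload/restart only on a difference). A minimal standalone sketch of the same idea, using the paths from the command above and not lifted from the minikube source:
	
	    # Sketch: install a regenerated docker.service only when it differs from
	    # the unit already on disk, then reload systemd and restart the daemon.
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl daemon-reload
	      sudo systemctl enable docker
	      sudo systemctl restart docker
	    fi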
	I0114 03:13:16.738518   18551 start.go:300] post-start starting for "embed-certs-031128" (driver="docker")
	I0114 03:13:16.738523   18551 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 03:13:16.738619   18551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 03:13:16.738685   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:16.796643   18551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54237 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/embed-certs-031128/id_rsa Username:docker}
	I0114 03:13:16.883941   18551 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 03:13:16.887513   18551 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 03:13:16.887527   18551 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 03:13:16.887539   18551 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 03:13:16.887546   18551 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 03:13:16.887559   18551 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 03:13:16.887646   18551 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 03:13:16.887803   18551 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 03:13:16.887983   18551 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 03:13:16.895251   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:13:16.913078   18551 start.go:303] post-start completed in 174.548056ms
	I0114 03:13:16.913183   18551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 03:13:16.913275   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:16.972640   18551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54237 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/embed-certs-031128/id_rsa Username:docker}
	I0114 03:13:17.058539   18551 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 03:13:17.063342   18551 fix.go:57] fixHost completed within 2.185174162s
	I0114 03:13:17.063362   18551 start.go:83] releasing machines lock for "embed-certs-031128", held for 2.18522804s
	I0114 03:13:17.063456   18551 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-031128
	I0114 03:13:17.121582   18551 ssh_runner.go:195] Run: cat /version.json
	I0114 03:13:17.121599   18551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 03:13:17.121657   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:17.121673   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:17.182209   18551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54237 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/embed-certs-031128/id_rsa Username:docker}
	I0114 03:13:17.182311   18551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54237 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/embed-certs-031128/id_rsa Username:docker}
	I0114 03:13:17.321699   18551 ssh_runner.go:195] Run: systemctl --version
	I0114 03:13:17.326803   18551 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 03:13:17.336607   18551 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 03:13:17.336673   18551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 03:13:17.348746   18551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 03:13:17.361961   18551 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 03:13:17.428265   18551 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 03:13:17.498107   18551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:13:17.565961   18551 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 03:13:17.813488   18551 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 03:13:17.881182   18551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:13:17.949916   18551 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 03:13:17.959692   18551 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 03:13:17.959776   18551 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 03:13:17.963571   18551 start.go:472] Will wait 60s for crictl version
	I0114 03:13:17.963622   18551 ssh_runner.go:195] Run: which crictl
	I0114 03:13:17.967387   18551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 03:13:18.067129   18551 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 03:13:18.067229   18551 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:13:18.096506   18551 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:13:18.171988   18551 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 03:13:18.172253   18551 cli_runner.go:164] Run: docker exec -t embed-certs-031128 dig +short host.docker.internal
	I0114 03:13:18.286361   18551 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 03:13:18.286491   18551 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 03:13:18.290881   18551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
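	The one-liner above rewrites the host.minikube.internal entry in place. Unpacked into separate steps (sketch only, same temp-file-then-copy approach and the IP resolved above):
	
	    # Sketch: drop any stale host.minikube.internal line, append the current
	    # host IP, and copy the rebuilt file back over /etc/hosts.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'192.168.65.2\thost.minikube.internal'
	    } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts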
	I0114 03:13:18.300568   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:18.358124   18551 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:13:18.358210   18551 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:13:18.382596   18551 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0114 03:13:18.382615   18551 docker.go:543] Images already preloaded, skipping extraction
	I0114 03:13:18.382706   18551 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:13:18.409075   18551 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0114 03:13:18.409101   18551 cache_images.go:84] Images are preloaded, skipping loading
	I0114 03:13:18.409221   18551 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 03:13:18.481083   18551 cni.go:95] Creating CNI manager for ""
	I0114 03:13:18.481098   18551 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:13:18.481114   18551 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 03:13:18.481129   18551 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-031128 NodeName:embed-certs-031128 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 03:13:18.481239   18551 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-031128"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 03:13:18.481322   18551 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-031128 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:embed-certs-031128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 03:13:18.481392   18551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 03:13:18.489261   18551 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 03:13:18.489340   18551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 03:13:18.496632   18551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0114 03:13:18.509522   18551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 03:13:18.522300   18551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2040 bytes)
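	The rendered kubeadm config lands on the node as kubeadm.yaml.new and is only promoted over the existing kubeadm.yaml after a comparison, as the diff and cp further down in this log show. The same swap as a standalone sketch:
	
	    # Sketch: promote a freshly rendered kubeadm config only when it changed.
	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    fi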
	I0114 03:13:18.535111   18551 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0114 03:13:18.539447   18551 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 03:13:18.549298   18551 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128 for IP: 192.168.67.2
	I0114 03:13:18.549414   18551 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 03:13:18.549465   18551 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 03:13:18.549571   18551 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128/client.key
	I0114 03:13:18.549645   18551 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128/apiserver.key.c7fa3a9e
	I0114 03:13:18.549698   18551 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128/proxy-client.key
	I0114 03:13:18.549936   18551 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 03:13:18.549974   18551 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 03:13:18.549986   18551 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 03:13:18.550022   18551 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 03:13:18.550059   18551 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 03:13:18.550095   18551 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 03:13:18.550181   18551 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:13:18.551877   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 03:13:18.569196   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 03:13:18.586656   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 03:13:18.603840   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/embed-certs-031128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 03:13:18.621017   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 03:13:18.638298   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 03:13:18.655455   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 03:13:18.672813   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 03:13:18.690189   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 03:13:18.707301   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 03:13:18.724517   18551 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 03:13:18.741794   18551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 03:13:18.754555   18551 ssh_runner.go:195] Run: openssl version
	I0114 03:13:18.759938   18551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 03:13:18.768223   18551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 03:13:18.772228   18551 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 03:13:18.772280   18551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 03:13:18.777582   18551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 03:13:18.785418   18551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 03:13:18.793715   18551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:13:18.797856   18551 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:13:18.797911   18551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:13:18.803373   18551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 03:13:18.811079   18551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 03:13:18.819688   18551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 03:13:18.823561   18551 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 03:13:18.823609   18551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 03:13:18.829181   18551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 03:13:18.836716   18551 kubeadm.go:396] StartCluster: {Name:embed-certs-031128 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:embed-certs-031128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:13:18.836837   18551 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:13:18.860278   18551 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 03:13:18.868044   18551 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 03:13:18.868059   18551 kubeadm.go:627] restartCluster start
	I0114 03:13:18.868113   18551 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 03:13:18.875271   18551 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:18.875352   18551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-031128
	I0114 03:13:18.934230   18551 kubeconfig.go:135] verify returned: extract IP: "embed-certs-031128" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 03:13:18.934407   18551 kubeconfig.go:146] "embed-certs-031128" context is missing from /Users/jenkins/minikube-integration/15642-1559/kubeconfig - will repair!
	I0114 03:13:18.934754   18551 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/kubeconfig: {Name:mkb6d1db5780815291441dc67b348461b9325651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:13:18.936151   18551 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 03:13:18.943981   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:18.944040   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:18.952274   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:16.346910   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:13:16.347068   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:13:19.153273   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:19.153356   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:19.162657   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:19.352580   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:19.352742   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:19.363737   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:19.552751   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:19.552922   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:19.564860   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:19.752743   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:19.752928   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:19.764219   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:19.952406   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:19.952505   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:19.962082   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:20.152583   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:20.152730   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:20.163308   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:20.353841   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:20.354004   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:20.365310   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:20.554451   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:20.554649   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:20.565753   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:20.752935   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:20.753008   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:20.762650   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:20.952795   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:20.952959   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:20.964109   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:21.153216   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:21.153434   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:21.164549   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:21.352690   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:21.352849   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:21.363984   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:21.554432   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:21.554629   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:21.565890   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:21.754459   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:21.754628   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:21.765817   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:21.952933   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:21.953086   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:21.963875   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:21.963886   18551 api_server.go:165] Checking apiserver status ...
	I0114 03:13:21.963945   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:13:21.972436   18551 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:21.972450   18551 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0114 03:13:21.972460   18551 kubeadm.go:1114] stopping kube-system containers ...
	I0114 03:13:21.972550   18551 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:13:21.997365   18551 docker.go:444] Stopping containers: [0474f21aee1d b4123080b9d6 caa6d2ba655f bb553a78e7f2 7e269163c4a3 2d66a09bb005 91f82024756e 537a882b50cb 3d726ee9ec33 6f1f05668e4a f01a39ce5d20 91ae7a52e903 a1aad603f03c 7c041e017432 0b8cf8674416 8101fcd3abb3]
	I0114 03:13:21.997465   18551 ssh_runner.go:195] Run: docker stop 0474f21aee1d b4123080b9d6 caa6d2ba655f bb553a78e7f2 7e269163c4a3 2d66a09bb005 91f82024756e 537a882b50cb 3d726ee9ec33 6f1f05668e4a f01a39ce5d20 91ae7a52e903 a1aad603f03c 7c041e017432 0b8cf8674416 8101fcd3abb3
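	Stopping the kube-system containers relies on kubelet's k8s_<container>_<pod>_<namespace>_ naming scheme; the ps/stop pair above collapses to one pipeline (sketch, GNU/busybox xargs assumed for -r):
	
	    # Sketch: find every container kubelet created for the kube-system
	    # namespace and stop them, mirroring the two commands above.
	    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' \
	      | xargs -r docker stop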
	I0114 03:13:22.022948   18551 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 03:13:22.033606   18551 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:13:22.041278   18551 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 14 11:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 14 11:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan 14 11:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 14 11:11 /etc/kubernetes/scheduler.conf
	
	I0114 03:13:22.041340   18551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 03:13:22.049055   18551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 03:13:22.056503   18551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 03:13:22.063966   18551 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:22.064032   18551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 03:13:22.071388   18551 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 03:13:22.078811   18551 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:13:22.078870   18551 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0114 03:13:22.086033   18551 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 03:13:22.093793   18551 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 03:13:22.093803   18551 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:13:22.142822   18551 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:13:22.898427   18551 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:13:23.015175   18551 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:13:23.068078   18551 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:13:23.160215   18551 api_server.go:51] waiting for apiserver process to appear ...
	I0114 03:13:23.160294   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:13:23.726109   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:13:24.226143   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:13:24.726702   18551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:13:24.737417   18551 api_server.go:71] duration metric: took 1.577206934s to wait for apiserver process to appear ...
	I0114 03:13:24.737436   18551 api_server.go:87] waiting for apiserver healthz status ...
	I0114 03:13:24.737468   18551 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54241/healthz ...
	I0114 03:13:27.053409   18551 api_server.go:278] https://127.0.0.1:54241/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 03:13:27.053424   18551 api_server.go:102] status: https://127.0.0.1:54241/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 03:13:27.554177   18551 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54241/healthz ...
	I0114 03:13:27.561231   18551 api_server.go:278] https://127.0.0.1:54241/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 03:13:27.561245   18551 api_server.go:102] status: https://127.0.0.1:54241/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 03:13:28.054824   18551 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54241/healthz ...
	I0114 03:13:28.060859   18551 api_server.go:278] https://127.0.0.1:54241/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 03:13:28.060879   18551 api_server.go:102] status: https://127.0.0.1:54241/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 03:13:28.555369   18551 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54241/healthz ...
	I0114 03:13:28.562763   18551 api_server.go:278] https://127.0.0.1:54241/healthz returned 200:
	ok
	I0114 03:13:28.568925   18551 api_server.go:140] control plane version: v1.25.3
	I0114 03:13:28.568939   18551 api_server.go:130] duration metric: took 3.831469524s to wait for apiserver health ...
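	(Note: the [+]/[-] lines above are the apiserver's verbose health-check listing; minikube keeps polling /healthz until it returns 200. As a sketch only, the same per-check listing could be fetched by hand against the forwarded port shown in this run; the port is ephemeral to this run and the request may need credentials depending on the cluster's anonymous-auth/RBAC settings:
	    curl -k "https://127.0.0.1:54241/healthz?verbose"
	With a working kubeconfig, `kubectl get --raw '/healthz?verbose'` returns the same listing.)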
	I0114 03:13:28.568945   18551 cni.go:95] Creating CNI manager for ""
	I0114 03:13:28.568952   18551 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:13:28.568964   18551 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 03:13:28.575759   18551 system_pods.go:59] 8 kube-system pods found
	I0114 03:13:28.575775   18551 system_pods.go:61] "coredns-565d847f94-hm5tl" [98c93196-c2be-4847-b558-0fd7af72a8f6] Running
	I0114 03:13:28.575783   18551 system_pods.go:61] "etcd-embed-certs-031128" [e4f7d1fa-b9d4-4d87-9d99-cf0acd8d1536] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 03:13:28.575791   18551 system_pods.go:61] "kube-apiserver-embed-certs-031128" [f571cb73-0b28-418a-b28b-70088a359678] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0114 03:13:28.575796   18551 system_pods.go:61] "kube-controller-manager-embed-certs-031128" [fc874fb9-7ebb-4f86-a998-4d1f24c965ed] Running
	I0114 03:13:28.575813   18551 system_pods.go:61] "kube-proxy-pntks" [a69c8eb5-8ac3-4cda-85f4-fcd34a88c8a8] Running
	I0114 03:13:28.575822   18551 system_pods.go:61] "kube-scheduler-embed-certs-031128" [aeb59db9-ae61-49c7-a5af-7b16739ca077] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0114 03:13:28.575829   18551 system_pods.go:61] "metrics-server-5c8fd5cf8-lmjrc" [afd40462-3fb4-4985-b656-162083fefafc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0114 03:13:28.575835   18551 system_pods.go:61] "storage-provisioner" [a4ebbdd1-5eee-404f-90c9-b697aa35585d] Running
	I0114 03:13:28.575839   18551 system_pods.go:74] duration metric: took 6.870499ms to wait for pod list to return data ...
	I0114 03:13:28.575845   18551 node_conditions.go:102] verifying NodePressure condition ...
	I0114 03:13:28.578937   18551 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 03:13:28.578952   18551 node_conditions.go:123] node cpu capacity is 6
	I0114 03:13:28.578960   18551 node_conditions.go:105] duration metric: took 3.111666ms to run NodePressure ...
	I0114 03:13:28.578975   18551 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:13:28.724446   18551 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 03:13:28.729136   18551 kubeadm.go:778] kubelet initialised
	I0114 03:13:28.729147   18551 kubeadm.go:779] duration metric: took 4.687104ms waiting for restarted kubelet to initialise ...
	I0114 03:13:28.729155   18551 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 03:13:28.734153   18551 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-hm5tl" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:28.738749   18551 pod_ready.go:92] pod "coredns-565d847f94-hm5tl" in "kube-system" namespace has status "Ready":"True"
	I0114 03:13:28.738758   18551 pod_ready.go:81] duration metric: took 4.592348ms waiting for pod "coredns-565d847f94-hm5tl" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:28.738767   18551 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-031128" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:26.347820   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:13:26.347990   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:13:30.750501   18551 pod_ready.go:102] pod "etcd-embed-certs-031128" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:32.751812   18551 pod_ready.go:102] pod "etcd-embed-certs-031128" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:35.249922   18551 pod_ready.go:102] pod "etcd-embed-certs-031128" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:37.253168   18551 pod_ready.go:102] pod "etcd-embed-certs-031128" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:39.751880   18551 pod_ready.go:102] pod "etcd-embed-certs-031128" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:41.752167   18551 pod_ready.go:102] pod "etcd-embed-certs-031128" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:44.252398   18551 pod_ready.go:92] pod "etcd-embed-certs-031128" in "kube-system" namespace has status "Ready":"True"
	I0114 03:13:44.252412   18551 pod_ready.go:81] duration metric: took 15.51353426s waiting for pod "etcd-embed-certs-031128" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:44.252418   18551 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-031128" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:44.256778   18551 pod_ready.go:92] pod "kube-apiserver-embed-certs-031128" in "kube-system" namespace has status "Ready":"True"
	I0114 03:13:44.256787   18551 pod_ready.go:81] duration metric: took 4.363722ms waiting for pod "kube-apiserver-embed-certs-031128" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:44.256793   18551 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-031128" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:44.261191   18551 pod_ready.go:92] pod "kube-controller-manager-embed-certs-031128" in "kube-system" namespace has status "Ready":"True"
	I0114 03:13:44.261200   18551 pod_ready.go:81] duration metric: took 4.402169ms waiting for pod "kube-controller-manager-embed-certs-031128" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:44.261206   18551 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pntks" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:44.265693   18551 pod_ready.go:92] pod "kube-proxy-pntks" in "kube-system" namespace has status "Ready":"True"
	I0114 03:13:44.265701   18551 pod_ready.go:81] duration metric: took 4.485406ms waiting for pod "kube-proxy-pntks" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:44.265708   18551 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-031128" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:44.270151   18551 pod_ready.go:92] pod "kube-scheduler-embed-certs-031128" in "kube-system" namespace has status "Ready":"True"
	I0114 03:13:44.270160   18551 pod_ready.go:81] duration metric: took 4.447116ms waiting for pod "kube-scheduler-embed-certs-031128" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:44.270166   18551 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace to be "Ready" ...
	I0114 03:13:46.656738   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:48.656775   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:46.350683   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:13:46.350910   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:13:51.155565   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:53.655649   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:55.656823   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:13:57.658772   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:00.156064   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:02.158701   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:04.657073   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:06.657225   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:09.155950   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:11.158721   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:13.656665   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:16.155891   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:18.156367   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:20.156731   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:22.659612   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:26.352893   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:14:26.353089   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:14:26.353097   17797 kubeadm.go:317] 
	I0114 03:14:26.353150   17797 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 03:14:26.353211   17797 kubeadm.go:317] 	timed out waiting for the condition
	I0114 03:14:26.353227   17797 kubeadm.go:317] 
	I0114 03:14:26.353264   17797 kubeadm.go:317] This error is likely caused by:
	I0114 03:14:26.353316   17797 kubeadm.go:317] 	- The kubelet is not running
	I0114 03:14:26.353467   17797 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 03:14:26.353487   17797 kubeadm.go:317] 
	I0114 03:14:26.353617   17797 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 03:14:26.353654   17797 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 03:14:26.353690   17797 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 03:14:26.353703   17797 kubeadm.go:317] 
	I0114 03:14:26.353781   17797 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 03:14:26.353853   17797 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0114 03:14:26.353931   17797 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0114 03:14:26.353973   17797 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0114 03:14:26.354029   17797 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 03:14:26.354052   17797 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0114 03:14:26.356073   17797 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 03:14:26.356189   17797 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0114 03:14:26.356310   17797 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 03:14:26.356373   17797 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 03:14:26.356424   17797 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W0114 03:14:26.356561   17797 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
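	
	(Note: at this point the first kubeadm init attempt for the v1.16.0 cluster has timed out waiting for the kubelet; minikube resets and retries below. If debugging by hand, the checks kubeadm suggests above could be run inside the minikube node container, which with the docker driver is a container named after the profile; the name here is taken from the Docker log section at the end of this report and is an assumption for illustration:
	    docker exec old-k8s-version-030235 systemctl status kubelet
	    docker exec old-k8s-version-030235 journalctl -xeu kubelet --no-pager
	    docker exec old-k8s-version-030235 sh -c 'docker ps -a | grep kube | grep -v pause'
	The same commands can also be run via `minikube ssh -p <profile>` if preferred.)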
	
	I0114 03:14:26.356585   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0114 03:14:26.772146   17797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:14:26.781892   17797 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 03:14:26.781951   17797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:14:26.789387   17797 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 03:14:26.789410   17797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 03:14:26.837273   17797 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0114 03:14:26.837326   17797 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 03:14:27.133445   17797 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 03:14:27.133535   17797 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 03:14:27.133612   17797 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 03:14:27.360881   17797 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 03:14:27.361665   17797 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 03:14:27.368176   17797 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0114 03:14:27.424739   17797 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 03:14:27.446596   17797 out.go:204]   - Generating certificates and keys ...
	I0114 03:14:27.446672   17797 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 03:14:27.446774   17797 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 03:14:27.446899   17797 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0114 03:14:27.446963   17797 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0114 03:14:27.447106   17797 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0114 03:14:27.447152   17797 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0114 03:14:27.447243   17797 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0114 03:14:27.447300   17797 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0114 03:14:27.447366   17797 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0114 03:14:27.447447   17797 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0114 03:14:27.447557   17797 kubeadm.go:317] [certs] Using the existing "sa" key
	I0114 03:14:27.447606   17797 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 03:14:27.782675   17797 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 03:14:27.938458   17797 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 03:14:28.018189   17797 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 03:14:28.181641   17797 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 03:14:28.182361   17797 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 03:14:25.155719   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:27.655810   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:28.204801   17797 out.go:204]   - Booting up control plane ...
	I0114 03:14:28.205040   17797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 03:14:28.205184   17797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 03:14:28.205334   17797 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 03:14:28.205466   17797 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 03:14:28.205731   17797 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0114 03:14:29.657245   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:32.158268   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:34.158554   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:36.656922   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:38.657708   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:41.157246   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:43.157952   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:45.657383   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:48.156417   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:50.659535   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:53.156597   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:55.157121   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:57.656353   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:14:59.658706   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:02.156885   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:04.156983   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:06.656948   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:08.658635   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:08.191630   17797 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0114 03:15:08.192165   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:15:08.192413   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:15:11.155811   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:13.157057   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:13.194308   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:15:13.194520   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:15:15.658658   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:18.156970   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:20.658646   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:23.157120   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:23.195684   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:15:23.195890   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:15:25.158392   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:27.656790   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:30.156552   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:32.157291   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:34.158620   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:36.657525   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:38.657608   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:41.158753   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:43.160117   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:43.196954   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:15:43.197157   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:15:45.659636   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:48.156148   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:50.157142   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:52.157478   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:54.657874   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:57.157065   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:15:59.659591   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:02.156591   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:04.157514   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:06.657492   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:08.657906   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:11.156067   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:13.158549   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:15.657591   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:18.156005   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:20.156888   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:22.656367   18551 pod_ready.go:102] pod "metrics-server-5c8fd5cf8-lmjrc" in "kube-system" namespace has status "Ready":"False"
	I0114 03:16:23.197857   17797 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0114 03:16:23.198022   17797 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0114 03:16:23.198036   17797 kubeadm.go:317] 
	I0114 03:16:23.198073   17797 kubeadm.go:317] Unfortunately, an error has occurred:
	I0114 03:16:23.198121   17797 kubeadm.go:317] 	timed out waiting for the condition
	I0114 03:16:23.198127   17797 kubeadm.go:317] 
	I0114 03:16:23.198166   17797 kubeadm.go:317] This error is likely caused by:
	I0114 03:16:23.198201   17797 kubeadm.go:317] 	- The kubelet is not running
	I0114 03:16:23.198294   17797 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0114 03:16:23.198303   17797 kubeadm.go:317] 
	I0114 03:16:23.198379   17797 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0114 03:16:23.198402   17797 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0114 03:16:23.198436   17797 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0114 03:16:23.198444   17797 kubeadm.go:317] 
	I0114 03:16:23.198522   17797 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0114 03:16:23.198596   17797 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0114 03:16:23.198665   17797 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I0114 03:16:23.198704   17797 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I0114 03:16:23.198763   17797 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0114 03:16:23.198792   17797 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I0114 03:16:23.201563   17797 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0114 03:16:23.201674   17797 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
	I0114 03:16:23.201763   17797 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0114 03:16:23.201838   17797 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0114 03:16:23.201898   17797 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0114 03:16:23.201933   17797 kubeadm.go:398] StartCluster complete in 7m57.833711298s
	I0114 03:16:23.202039   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0114 03:16:23.225134   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.225148   17797 logs.go:276] No container was found matching "kube-apiserver"
	I0114 03:16:23.225229   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0114 03:16:23.247933   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.247947   17797 logs.go:276] No container was found matching "etcd"
	I0114 03:16:23.248031   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0114 03:16:23.274185   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.274203   17797 logs.go:276] No container was found matching "coredns"
	I0114 03:16:23.274302   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0114 03:16:23.299596   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.299609   17797 logs.go:276] No container was found matching "kube-scheduler"
	I0114 03:16:23.299692   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0114 03:16:23.322788   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.322804   17797 logs.go:276] No container was found matching "kube-proxy"
	I0114 03:16:23.322895   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0114 03:16:23.346309   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.346322   17797 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0114 03:16:23.346409   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0114 03:16:23.370261   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.370274   17797 logs.go:276] No container was found matching "storage-provisioner"
	I0114 03:16:23.370358   17797 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0114 03:16:23.395385   17797 logs.go:274] 0 containers: []
	W0114 03:16:23.395400   17797 logs.go:276] No container was found matching "kube-controller-manager"
	I0114 03:16:23.395408   17797 logs.go:123] Gathering logs for kubelet ...
	I0114 03:16:23.395421   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0114 03:16:23.434433   17797 logs.go:123] Gathering logs for dmesg ...
	I0114 03:16:23.434450   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0114 03:16:23.447209   17797 logs.go:123] Gathering logs for describe nodes ...
	I0114 03:16:23.447224   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0114 03:16:23.504472   17797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0114 03:16:23.504484   17797 logs.go:123] Gathering logs for Docker ...
	I0114 03:16:23.504490   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0114 03:16:23.518654   17797 logs.go:123] Gathering logs for container status ...
	I0114 03:16:23.518668   17797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0114 03:16:25.568851   17797 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050156639s)
	W0114 03:16:25.568966   17797 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0114 03:16:25.568982   17797 out.go:239] * 
	W0114 03:16:25.569145   17797 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 03:16:25.569173   17797 out.go:239] * 
	W0114 03:16:25.569804   17797 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 03:16:25.612853   17797 out.go:177] 
	W0114 03:16:25.655104   17797 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.21. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0114 03:16:25.655198   17797 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0114 03:16:25.655267   17797 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0114 03:16:25.717989   17797 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-14 11:08:21 UTC, end at Sat 2023-01-14 11:16:27 UTC. --
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Stopping Docker Application Container Engine...
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[129]: time="2023-01-14T11:08:24.399807470Z" level=info msg="Processing signal 'terminated'"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[129]: time="2023-01-14T11:08:24.400559208Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[129]: time="2023-01-14T11:08:24.400776500Z" level=info msg="Daemon shutdown complete"
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: docker.service: Succeeded.
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Stopped Docker Application Container Engine.
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Starting Docker Application Container Engine...
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.449671087Z" level=info msg="Starting up"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451369542Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451410150Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451426946Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451434106Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452532702Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452614323Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452656109Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452668984Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.456504219Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.460477786Z" level=info msg="Loading containers: start."
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.537877352Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.569431630Z" level=info msg="Loading containers: done."
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.577586702Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.577652554Z" level=info msg="Daemon has completed initialization"
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Started Docker Application Container Engine.
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.602452785Z" level=info msg="API listen on [::]:2376"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.605161476Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-14T11:16:29Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  11:16:29 up  1:15,  0 users,  load average: 0.87, 0.83, 1.05
	Linux old-k8s-version-030235 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 11:08:21 UTC, end at Sat 2023-01-14 11:16:29 UTC. --
	Jan 14 11:16:27 old-k8s-version-030235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 11:16:28 old-k8s-version-030235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jan 14 11:16:28 old-k8s-version-030235 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 11:16:28 old-k8s-version-030235 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 11:16:28 old-k8s-version-030235 kubelet[14612]: I0114 11:16:28.727892   14612 server.go:410] Version: v1.16.0
	Jan 14 11:16:28 old-k8s-version-030235 kubelet[14612]: I0114 11:16:28.728094   14612 plugins.go:100] No cloud provider specified.
	Jan 14 11:16:28 old-k8s-version-030235 kubelet[14612]: I0114 11:16:28.728106   14612 server.go:773] Client rotation is on, will bootstrap in background
	Jan 14 11:16:28 old-k8s-version-030235 kubelet[14612]: I0114 11:16:28.730095   14612 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 14 11:16:28 old-k8s-version-030235 kubelet[14612]: W0114 11:16:28.730780   14612 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 14 11:16:28 old-k8s-version-030235 kubelet[14612]: W0114 11:16:28.730850   14612 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 14 11:16:28 old-k8s-version-030235 kubelet[14612]: F0114 11:16:28.730876   14612 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 14 11:16:28 old-k8s-version-030235 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 14 11:16:28 old-k8s-version-030235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 11:16:29 old-k8s-version-030235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jan 14 11:16:29 old-k8s-version-030235 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 11:16:29 old-k8s-version-030235 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 11:16:29 old-k8s-version-030235 kubelet[14644]: I0114 11:16:29.476431   14644 server.go:410] Version: v1.16.0
	Jan 14 11:16:29 old-k8s-version-030235 kubelet[14644]: I0114 11:16:29.476711   14644 plugins.go:100] No cloud provider specified.
	Jan 14 11:16:29 old-k8s-version-030235 kubelet[14644]: I0114 11:16:29.476722   14644 server.go:773] Client rotation is on, will bootstrap in background
	Jan 14 11:16:29 old-k8s-version-030235 kubelet[14644]: I0114 11:16:29.479918   14644 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 14 11:16:29 old-k8s-version-030235 kubelet[14644]: W0114 11:16:29.481263   14644 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 14 11:16:29 old-k8s-version-030235 kubelet[14644]: W0114 11:16:29.481333   14644 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 14 11:16:29 old-k8s-version-030235 kubelet[14644]: F0114 11:16:29.481360   14644 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 14 11:16:29 old-k8s-version-030235 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 14 11:16:29 old-k8s-version-030235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 03:16:29.406297   18891 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
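The kubelet log above ends in a crash loop on "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 162), which lines up with the two troubleshooting commands kubeadm suggests. A minimal way to look at the same state from the host, assuming the kic node container is still running under the name shown later in docker inspect, might be:

	docker exec old-k8s-version-030235 systemctl status kubelet --no-pager      # current unit state, same as kubeadm's first suggestion
	docker exec old-k8s-version-030235 journalctl -xeu kubelet --no-pager | tail -n 50   # recent kubelet journal entries
	docker exec old-k8s-version-030235 ls /sys/fs/cgroup                        # check which cgroup controllers are actually mounted in the node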
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 2 (403.61359ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-030235" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (489.78s)
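minikube's own suggestion in the failed start output above points at the kubelet cgroup driver. A possible manual retry, reusing the profile, Kubernetes version and driver that appear in this run's logs (a sketch only, not what the test harness executes), would be:

	out/minikube-darwin-amd64 start -p old-k8s-version-030235 --kubernetes-version=v1.16.0 --driver=docker --extra-config=kubelet.cgroup-driver=systemd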

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:16:53.148162    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:17:04.658776    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:17:07.571408    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:17:11.849942    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:17:46.607637    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:17:58.410472    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:18:15.044393    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:18:15.069161    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:18:59.157819    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 03:19:00.519819    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:19:09.655940    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:19:19.840169    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:19:27.551794    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:20:22.225921    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 03:20:23.617747    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:20:31.230816    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:20:41.629653    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:20:54.867465    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:20:58.922491    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:21:01.473288    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:22:07.585672    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:22:11.864874    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:22:17.914051    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:22:46.622689    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:22:58.425513    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:23:15.058765    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:23:30.642867    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:23:34.913606    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:23:59.172505    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:24:00.534447    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:24:19.855291    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:24:27.567543    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:24:38.113240    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:25:41.631379    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:25:42.917372    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:25:54.873039    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 2 (427.922457ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-030235" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
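The wait loop above polls the apiserver for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. The equivalent manual check, assuming the kubeconfig context minikube creates for this profile, would look something like:

	kubectl --context old-k8s-version-030235 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide   # same label selector the test uses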
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-030235
helpers_test.go:235: (dbg) docker inspect old-k8s-version-030235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942",
	        "Created": "2023-01-14T11:02:44.471910321Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T11:08:21.837144259Z",
	            "FinishedAt": "2023-01-14T11:08:18.899183667Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hostname",
	        "HostsPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hosts",
	        "LogPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942-json.log",
	        "Name": "/old-k8s-version-030235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-030235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-030235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-030235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-030235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-030235",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96e0c70534d1f6a1812c15eb3499843abdb380deba02ee4637e8918b0f3daae3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54076"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54078"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/96e0c70534d1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-030235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d977528adcbe",
	                        "old-k8s-version-030235"
	                    ],
	                    "NetworkID": "ab958c8662819925836c350f1443c8060424291379d9dc2b6c89656fa5f7da2a",
	                    "EndpointID": "99ff9fa9f16b08cacf98f575b4464b9b756d4f1cf10c888cca45473adbdc8e4e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
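The EOF errors earlier in this test hit https://127.0.0.1:54078, which is simply the host port Docker mapped to the node's apiserver port 8443/tcp, as the Ports block in the inspect output above shows. That mapping can be read back directly from the same container, for example:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-030235   # prints 54078 for this run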
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 2 (406.245327ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-030235 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-030235 logs -n 25: (3.816630437s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p no-preload-030433                                       | no-preload-030433            | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:11 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p no-preload-030433                                       | no-preload-030433            | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:11 PST |
	| delete  | -p no-preload-030433                                       | no-preload-030433            | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:11 PST |
	| start   | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:11 PST | 14 Jan 23 03:12 PST |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-031128                | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:13 PST | 14 Jan 23 03:13 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:13 PST | 14 Jan 23 03:13 PST |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-031128                     | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:13 PST | 14 Jan 23 03:13 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:13 PST | 14 Jan 23 03:18 PST |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-031128 sudo                                 | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	| delete  | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	| delete  | -p                                                         | disable-driver-mounts-031842 | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	|         | disable-driver-mounts-031842                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:19 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:19 PST | 14 Jan 23 03:19 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:19 PST | 14 Jan 23 03:20 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-031843           | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:20 PST | 14 Jan 23 03:20 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:20 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	| start   | -p newest-cni-032535 --memory=2200 --alsologtostderr       | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:25 PST |                     |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 03:25:35
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 03:25:35.393746   20188 out.go:296] Setting OutFile to fd 1 ...
	I0114 03:25:35.393937   20188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:25:35.393944   20188 out.go:309] Setting ErrFile to fd 2...
	I0114 03:25:35.393948   20188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:25:35.394059   20188 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 03:25:35.394640   20188 out.go:303] Setting JSON to false
	I0114 03:25:35.413375   20188 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5109,"bootTime":1673690426,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 03:25:35.413494   20188 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 03:25:35.435365   20188 out.go:177] * [newest-cni-032535] minikube v1.28.0 on Darwin 13.0.1
	I0114 03:25:35.478014   20188 notify.go:220] Checking for updates...
	I0114 03:25:35.478021   20188 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 03:25:35.499909   20188 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 03:25:35.521938   20188 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 03:25:35.543993   20188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 03:25:35.566000   20188 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 03:25:35.588693   20188 config.go:180] Loaded profile config "old-k8s-version-030235": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0114 03:25:35.588775   20188 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 03:25:35.698679   20188 docker.go:138] docker version: linux-20.10.21
	I0114 03:25:35.698816   20188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:25:35.841241   20188 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:25:35.74910706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:25:35.884915   20188 out.go:177] * Using the docker driver based on user configuration
	I0114 03:25:35.906744   20188 start.go:294] selected driver: docker
	I0114 03:25:35.906781   20188 start.go:838] validating driver "docker" against <nil>
	I0114 03:25:35.906807   20188 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 03:25:35.910662   20188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:25:36.056459   20188 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:25:35.962864161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:25:36.056577   20188 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0114 03:25:36.056601   20188 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0114 03:25:36.056745   20188 start_flags.go:936] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0114 03:25:36.100088   20188 out.go:177] * Using Docker Desktop driver with root privileges
	I0114 03:25:36.122910   20188 cni.go:95] Creating CNI manager for ""
	I0114 03:25:36.122967   20188 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:25:36.122983   20188 start_flags.go:319] config:
	{Name:newest-cni-032535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-032535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:25:36.144159   20188 out.go:177] * Starting control plane node newest-cni-032535 in cluster newest-cni-032535
	I0114 03:25:36.186305   20188 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 03:25:36.229054   20188 out.go:177] * Pulling base image ...
	I0114 03:25:36.250291   20188 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:25:36.250382   20188 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 03:25:36.250384   20188 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 03:25:36.250426   20188 cache.go:57] Caching tarball of preloaded images
	I0114 03:25:36.250654   20188 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 03:25:36.250679   20188 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 03:25:36.251780   20188 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/config.json ...
	I0114 03:25:36.251949   20188 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/config.json: {Name:mk1b8b8623a8abebe1b04501fd0563d4b4e5cbb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:25:36.307092   20188 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 03:25:36.307112   20188 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 03:25:36.307132   20188 cache.go:193] Successfully downloaded all kic artifacts
	I0114 03:25:36.307174   20188 start.go:364] acquiring machines lock for newest-cni-032535: {Name:mkd4f8cc2b2c691682dbced0ece05c9d995f481f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:25:36.307343   20188 start.go:368] acquired machines lock for "newest-cni-032535" in 158.031µs
	I0114 03:25:36.307375   20188 start.go:93] Provisioning new machine with config: &{Name:newest-cni-032535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-032535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 03:25:36.307466   20188 start.go:125] createHost starting for "" (driver="docker")
	I0114 03:25:36.329374   20188 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0114 03:25:36.329724   20188 start.go:159] libmachine.API.Create for "newest-cni-032535" (driver="docker")
	I0114 03:25:36.329783   20188 client.go:168] LocalClient.Create starting
	I0114 03:25:36.329977   20188 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem
	I0114 03:25:36.330057   20188 main.go:134] libmachine: Decoding PEM data...
	I0114 03:25:36.330086   20188 main.go:134] libmachine: Parsing certificate...
	I0114 03:25:36.330199   20188 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem
	I0114 03:25:36.330270   20188 main.go:134] libmachine: Decoding PEM data...
	I0114 03:25:36.330290   20188 main.go:134] libmachine: Parsing certificate...
	I0114 03:25:36.351457   20188 cli_runner.go:164] Run: docker network inspect newest-cni-032535 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0114 03:25:36.408263   20188 cli_runner.go:211] docker network inspect newest-cni-032535 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0114 03:25:36.408381   20188 network_create.go:280] running [docker network inspect newest-cni-032535] to gather additional debugging logs...
	I0114 03:25:36.408402   20188 cli_runner.go:164] Run: docker network inspect newest-cni-032535
	W0114 03:25:36.463395   20188 cli_runner.go:211] docker network inspect newest-cni-032535 returned with exit code 1
	I0114 03:25:36.463429   20188 network_create.go:283] error running [docker network inspect newest-cni-032535]: docker network inspect newest-cni-032535: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-032535
	I0114 03:25:36.463445   20188 network_create.go:285] output of [docker network inspect newest-cni-032535]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-032535
	
	** /stderr **
	I0114 03:25:36.463548   20188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0114 03:25:36.519445   20188 network.go:277] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0009c20b0] misses:0}
	I0114 03:25:36.519487   20188 network.go:210] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:25:36.519500   20188 network_create.go:123] attempt to create docker network newest-cni-032535 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0114 03:25:36.519598   20188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-032535 newest-cni-032535
	W0114 03:25:36.573802   20188 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-032535 newest-cni-032535 returned with exit code 1
	W0114 03:25:36.573848   20188 network_create.go:115] failed to create docker network newest-cni-032535 192.168.49.0/24, will retry: subnet is taken
	I0114 03:25:36.574135   20188 network.go:268] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009c20b0] amended:false}} dirty:map[] misses:0}
	I0114 03:25:36.574155   20188 network.go:213] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:25:36.574372   20188 network.go:277] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009c20b0] amended:true}} dirty:map[192.168.49.0:0xc0009c20b0 192.168.58.0:0xc000129440] misses:0}
	I0114 03:25:36.574387   20188 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:25:36.574400   20188 network_create.go:123] attempt to create docker network newest-cni-032535 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0114 03:25:36.574492   20188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-032535 newest-cni-032535
	W0114 03:25:36.629191   20188 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-032535 newest-cni-032535 returned with exit code 1
	W0114 03:25:36.629228   20188 network_create.go:115] failed to create docker network newest-cni-032535 192.168.58.0/24, will retry: subnet is taken
	I0114 03:25:36.629485   20188 network.go:268] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009c20b0] amended:true}} dirty:map[192.168.49.0:0xc0009c20b0 192.168.58.0:0xc000129440] misses:1}
	I0114 03:25:36.629505   20188 network.go:213] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:25:36.629741   20188 network.go:277] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009c20b0] amended:true}} dirty:map[192.168.49.0:0xc0009c20b0 192.168.58.0:0xc000129440 192.168.67.0:0xc000a14168] misses:1}
	I0114 03:25:36.629756   20188 network.go:210] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 03:25:36.629763   20188 network_create.go:123] attempt to create docker network newest-cni-032535 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0114 03:25:36.629848   20188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-032535 newest-cni-032535
	I0114 03:25:36.720695   20188 network_create.go:107] docker network newest-cni-032535 192.168.67.0/24 created
	I0114 03:25:36.720727   20188 kic.go:117] calculated static IP "192.168.67.2" for the "newest-cni-032535" container
	I0114 03:25:36.720866   20188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0114 03:25:36.778837   20188 cli_runner.go:164] Run: docker volume create newest-cni-032535 --label name.minikube.sigs.k8s.io=newest-cni-032535 --label created_by.minikube.sigs.k8s.io=true
	I0114 03:25:36.835048   20188 oci.go:103] Successfully created a docker volume newest-cni-032535
	I0114 03:25:36.835209   20188 cli_runner.go:164] Run: docker run --rm --name newest-cni-032535-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-032535 --entrypoint /usr/bin/test -v newest-cni-032535:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -d /var/lib
	I0114 03:25:37.270005   20188 oci.go:107] Successfully prepared a docker volume newest-cni-032535
	I0114 03:25:37.270037   20188 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:25:37.270052   20188 kic.go:190] Starting extracting preloaded images to volume ...
	I0114 03:25:37.270169   20188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-032535:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir
	I0114 03:25:43.681721   20188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-032535:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c -I lz4 -xf /preloaded.tar -C /extractDir: (6.411395057s)
	I0114 03:25:43.681748   20188 kic.go:199] duration metric: took 6.411620 seconds to extract preloaded images to volume
	I0114 03:25:43.681896   20188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0114 03:25:43.825321   20188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-032535 --name newest-cni-032535 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-032535 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-032535 --network newest-cni-032535 --ip 192.168.67.2 --volume newest-cni-032535:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c
	I0114 03:25:44.248002   20188 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Running}}
	I0114 03:25:44.313282   20188 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:25:44.378717   20188 cli_runner.go:164] Run: docker exec newest-cni-032535 stat /var/lib/dpkg/alternatives/iptables
	I0114 03:25:44.488901   20188 oci.go:144] the created container "newest-cni-032535" has a running status.
	I0114 03:25:44.488946   20188 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa...
	I0114 03:25:44.561216   20188 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0114 03:25:44.673302   20188 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:25:44.730937   20188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0114 03:25:44.730957   20188 kic_runner.go:114] Args: [docker exec --privileged newest-cni-032535 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0114 03:25:44.836381   20188 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:25:44.894256   20188 machine.go:88] provisioning docker machine ...
	I0114 03:25:44.894295   20188 ubuntu.go:169] provisioning hostname "newest-cni-032535"
	I0114 03:25:44.894401   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:44.954992   20188 main.go:134] libmachine: Using SSH client type: native
	I0114 03:25:44.955183   20188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55069 <nil> <nil>}
	I0114 03:25:44.955202   20188 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-032535 && echo "newest-cni-032535" | sudo tee /etc/hostname
	I0114 03:25:45.081207   20188 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-032535
	
	I0114 03:25:45.081310   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:45.137868   20188 main.go:134] libmachine: Using SSH client type: native
	I0114 03:25:45.138029   20188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55069 <nil> <nil>}
	I0114 03:25:45.138046   20188 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-032535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-032535/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-032535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 03:25:45.256990   20188 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:25:45.257012   20188 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 03:25:45.257032   20188 ubuntu.go:177] setting up certificates
	I0114 03:25:45.257047   20188 provision.go:83] configureAuth start
	I0114 03:25:45.257138   20188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-032535
	I0114 03:25:45.313465   20188 provision.go:138] copyHostCerts
	I0114 03:25:45.313560   20188 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 03:25:45.313568   20188 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 03:25:45.313678   20188 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 03:25:45.313876   20188 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 03:25:45.313882   20188 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 03:25:45.313951   20188 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 03:25:45.314115   20188 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 03:25:45.314121   20188 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 03:25:45.314202   20188 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 03:25:45.314349   20188 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.newest-cni-032535 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-032535]
	I0114 03:25:45.421389   20188 provision.go:172] copyRemoteCerts
	I0114 03:25:45.421509   20188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 03:25:45.421618   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:45.480693   20188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55069 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:25:45.567571   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 03:25:45.585339   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0114 03:25:45.602266   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 03:25:45.619215   20188 provision.go:86] duration metric: configureAuth took 362.152522ms
	I0114 03:25:45.619230   20188 ubuntu.go:193] setting minikube options for container-runtime
	I0114 03:25:45.619382   20188 config.go:180] Loaded profile config "newest-cni-032535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:25:45.619460   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:45.706608   20188 main.go:134] libmachine: Using SSH client type: native
	I0114 03:25:45.706792   20188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55069 <nil> <nil>}
	I0114 03:25:45.706806   20188 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 03:25:45.827018   20188 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 03:25:45.827037   20188 ubuntu.go:71] root file system type: overlay
	I0114 03:25:45.827190   20188 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 03:25:45.827289   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:45.884818   20188 main.go:134] libmachine: Using SSH client type: native
	I0114 03:25:45.884982   20188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55069 <nil> <nil>}
	I0114 03:25:45.885037   20188 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 03:25:46.010997   20188 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 03:25:46.011102   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:46.069090   20188 main.go:134] libmachine: Using SSH client type: native
	I0114 03:25:46.069249   20188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55069 <nil> <nil>}
	I0114 03:25:46.069270   20188 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 03:25:46.657607   20188 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-25 18:00:04.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-14 11:25:46.007599137 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0114 03:25:46.657631   20188 machine.go:91] provisioned docker machine in 1.763334131s
	I0114 03:25:46.657637   20188 client.go:171] LocalClient.Create took 10.327726476s
	I0114 03:25:46.657656   20188 start.go:167] duration metric: libmachine.API.Create for "newest-cni-032535" took 10.327811828s
	I0114 03:25:46.657667   20188 start.go:300] post-start starting for "newest-cni-032535" (driver="docker")
	I0114 03:25:46.657671   20188 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 03:25:46.657747   20188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 03:25:46.657814   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:46.718077   20188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55069 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:25:46.807317   20188 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 03:25:46.810974   20188 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 03:25:46.810993   20188 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 03:25:46.811000   20188 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 03:25:46.811009   20188 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 03:25:46.811020   20188 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 03:25:46.811133   20188 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 03:25:46.811336   20188 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 03:25:46.811555   20188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 03:25:46.818956   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:25:46.836007   20188 start.go:303] post-start completed in 178.329306ms
	I0114 03:25:46.836566   20188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-032535
	I0114 03:25:46.893904   20188 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/config.json ...
	I0114 03:25:46.894387   20188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 03:25:46.894457   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:46.952642   20188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55069 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:25:47.036909   20188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 03:25:47.041576   20188 start.go:128] duration metric: createHost completed in 10.733974781s
	I0114 03:25:47.041591   20188 start.go:83] releasing machines lock for "newest-cni-032535", held for 10.734112408s
	I0114 03:25:47.041694   20188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-032535
	I0114 03:25:47.099684   20188 ssh_runner.go:195] Run: cat /version.json
	I0114 03:25:47.099689   20188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 03:25:47.099760   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:47.099780   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:47.162303   20188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55069 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:25:47.163347   20188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55069 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:25:47.302113   20188 ssh_runner.go:195] Run: systemctl --version
	I0114 03:25:47.307168   20188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0114 03:25:47.314621   20188 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0114 03:25:47.327544   20188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:25:47.400099   20188 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0114 03:25:47.479252   20188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 03:25:47.489983   20188 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 03:25:47.490059   20188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 03:25:47.499480   20188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 03:25:47.512637   20188 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 03:25:47.580133   20188 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 03:25:47.653447   20188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:25:47.726592   20188 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 03:25:47.937377   20188 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 03:25:48.005921   20188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:25:48.076419   20188 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 03:25:48.085980   20188 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 03:25:48.086064   20188 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 03:25:48.090084   20188 start.go:472] Will wait 60s for crictl version
	I0114 03:25:48.090134   20188 ssh_runner.go:195] Run: which crictl
	I0114 03:25:48.093795   20188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 03:25:48.123230   20188 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 03:25:48.123331   20188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:25:48.152181   20188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:25:48.226647   20188 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 03:25:48.226844   20188 cli_runner.go:164] Run: docker exec -t newest-cni-032535 dig +short host.docker.internal
	I0114 03:25:48.341964   20188 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 03:25:48.342089   20188 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 03:25:48.346504   20188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
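The bash one-liner above pins host.minikube.internal in the guest's /etc/hosts idempotently: any existing entry for the name is filtered out, the current mapping is appended, and the rewritten file is copied back into place. Expanded for readability (the temporary file name here is illustrative; the log uses /tmp/h.$$):

	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new   # drop any stale entry for the name
	printf '192.168.65.2\thost.minikube.internal\n' >> /tmp/hosts.new  # append the mapping discovered above
	sudo cp /tmp/hosts.new /etc/hosts                                  # overwrite the original in one cp

The same pattern is reused later for control-plane.minikube.internal.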
	I0114 03:25:48.356463   20188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:25:48.436874   20188 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0114 03:25:48.458848   20188 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:25:48.459035   20188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:25:48.485490   20188 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 03:25:48.485510   20188 docker.go:543] Images already preloaded, skipping extraction
	I0114 03:25:48.485606   20188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:25:48.510054   20188 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 03:25:48.510075   20188 cache_images.go:84] Images are preloaded, skipping loading
	I0114 03:25:48.510192   20188 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 03:25:48.582086   20188 cni.go:95] Creating CNI manager for ""
	I0114 03:25:48.582106   20188 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:25:48.582125   20188 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0114 03:25:48.582142   20188 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-032535 NodeName:newest-cni-032535 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArg
s:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 03:25:48.582264   20188 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-032535"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 03:25:48.582349   20188 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-032535 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-032535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 03:25:48.582424   20188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 03:25:48.590392   20188 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 03:25:48.590456   20188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 03:25:48.597777   20188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I0114 03:25:48.611036   20188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 03:25:48.624366   20188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I0114 03:25:48.637952   20188 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0114 03:25:48.641822   20188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 03:25:48.651632   20188 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535 for IP: 192.168.67.2
	I0114 03:25:48.651784   20188 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 03:25:48.651849   20188 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 03:25:48.651902   20188 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/client.key
	I0114 03:25:48.651917   20188 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/client.crt with IP's: []
	I0114 03:25:48.785285   20188 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/client.crt ...
	I0114 03:25:48.785302   20188 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/client.crt: {Name:mk42c72a8f321c478dba300c4f3402f2cda5799c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:25:48.785638   20188 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/client.key ...
	I0114 03:25:48.785646   20188 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/client.key: {Name:mk7e27bb2534c1cdb61d485c8c5638219c5440b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:25:48.785852   20188 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.key.c7fa3a9e
	I0114 03:25:48.785872   20188 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0114 03:25:48.890617   20188 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.crt.c7fa3a9e ...
	I0114 03:25:48.890633   20188 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.crt.c7fa3a9e: {Name:mk81ace58fab863d6a54b6f6cda31c75d3327fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:25:48.890905   20188 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.key.c7fa3a9e ...
	I0114 03:25:48.890913   20188 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.key.c7fa3a9e: {Name:mkc6ac5250812959cb69e6fc2b771cb526f477e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:25:48.891098   20188 certs.go:320] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.crt
	I0114 03:25:48.891262   20188 certs.go:324] copying /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.key
	I0114 03:25:48.891443   20188 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.key
	I0114 03:25:48.891463   20188 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.crt with IP's: []
	I0114 03:25:49.214818   20188 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.crt ...
	I0114 03:25:49.214834   20188 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.crt: {Name:mkf9412d837003438f7082c4a4b1d60d00d93ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:25:49.215151   20188 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.key ...
	I0114 03:25:49.215160   20188 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.key: {Name:mk0d16fc0b8796b1759441879db5aeb2c75245ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:25:49.215612   20188 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 03:25:49.215669   20188 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 03:25:49.215687   20188 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 03:25:49.215743   20188 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 03:25:49.215784   20188 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 03:25:49.215819   20188 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 03:25:49.215898   20188 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:25:49.216413   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 03:25:49.234547   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 03:25:49.252055   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 03:25:49.269473   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 03:25:49.286628   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 03:25:49.303744   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 03:25:49.320891   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 03:25:49.338239   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 03:25:49.355382   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 03:25:49.373022   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 03:25:49.390452   20188 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 03:25:49.407775   20188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 03:25:49.422223   20188 ssh_runner.go:195] Run: openssl version
	I0114 03:25:49.428635   20188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 03:25:49.437885   20188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 03:25:49.442234   20188 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 03:25:49.442312   20188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 03:25:49.448626   20188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 03:25:49.457649   20188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 03:25:49.466421   20188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 03:25:49.470998   20188 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 03:25:49.471061   20188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 03:25:49.476909   20188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 03:25:49.485474   20188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 03:25:49.494772   20188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:25:49.499307   20188 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:25:49.499359   20188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:25:49.504991   20188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
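The openssl/ln pairs above install each CA into the OpenSSL hashed-directory layout: the certificate's subject hash is computed with openssl x509 -hash, and a <hash>.0 symlink in /etc/ssl/certs lets verification code look the CA up by that hash. A sketch of the same two steps for the minikube CA (the shell variable name is mine; the value it captures is presumably the b5213941 used in the preceding command):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

Note that /etc/ssl/certs/minikubeCA.pem is itself a symlink created a few lines earlier, so the hashed link ultimately resolves to the file under /usr/share/ca-certificates.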
	I0114 03:25:49.513152   20188 kubeadm.go:396] StartCluster: {Name:newest-cni-032535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-032535 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:25:49.513275   20188 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:25:49.536797   20188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 03:25:49.545120   20188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 03:25:49.552527   20188 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0114 03:25:49.552631   20188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:25:49.561127   20188 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0114 03:25:49.561178   20188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0114 03:25:49.608904   20188 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I0114 03:25:49.608957   20188 kubeadm.go:317] [preflight] Running pre-flight checks
	I0114 03:25:49.714170   20188 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0114 03:25:49.714337   20188 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0114 03:25:49.714454   20188 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0114 03:25:49.846123   20188 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0114 03:25:49.888586   20188 out.go:204]   - Generating certificates and keys ...
	I0114 03:25:49.888661   20188 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0114 03:25:49.888758   20188 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0114 03:25:50.105259   20188 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0114 03:25:50.271006   20188 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0114 03:25:50.357341   20188 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0114 03:25:50.581214   20188 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0114 03:25:50.797903   20188 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0114 03:25:50.798078   20188 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-032535] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0114 03:25:51.013378   20188 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0114 03:25:51.013489   20188 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-032535] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0114 03:25:51.069158   20188 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0114 03:25:51.169611   20188 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0114 03:25:51.252457   20188 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0114 03:25:51.252505   20188 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0114 03:25:51.422163   20188 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0114 03:25:51.488372   20188 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0114 03:25:51.620676   20188 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0114 03:25:51.791170   20188 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0114 03:25:51.801839   20188 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0114 03:25:51.802523   20188 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0114 03:25:51.802596   20188 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0114 03:25:51.869831   20188 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0114 03:25:51.891430   20188 out.go:204]   - Booting up control plane ...
	I0114 03:25:51.891522   20188 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0114 03:25:51.891620   20188 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0114 03:25:51.891719   20188 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0114 03:25:51.891794   20188 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0114 03:25:51.891915   20188 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-14 11:08:21 UTC, end at Sat 2023-01-14 11:26:02 UTC. --
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Stopping Docker Application Container Engine...
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[129]: time="2023-01-14T11:08:24.399807470Z" level=info msg="Processing signal 'terminated'"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[129]: time="2023-01-14T11:08:24.400559208Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[129]: time="2023-01-14T11:08:24.400776500Z" level=info msg="Daemon shutdown complete"
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: docker.service: Succeeded.
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Stopped Docker Application Container Engine.
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Starting Docker Application Container Engine...
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.449671087Z" level=info msg="Starting up"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451369542Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451410150Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451426946Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451434106Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452532702Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452614323Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452656109Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452668984Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.456504219Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.460477786Z" level=info msg="Loading containers: start."
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.537877352Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.569431630Z" level=info msg="Loading containers: done."
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.577586702Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.577652554Z" level=info msg="Daemon has completed initialization"
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Started Docker Application Container Engine.
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.602452785Z" level=info msg="API listen on [::]:2376"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.605161476Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-01-14T11:26:04Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  11:26:04 up  1:25,  0 users,  load average: 1.69, 1.12, 1.07
	Linux old-k8s-version-030235 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 11:08:21 UTC, end at Sat 2023-01-14 11:26:04 UTC. --
	Jan 14 11:26:02 old-k8s-version-030235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 11:26:03 old-k8s-version-030235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 927.
	Jan 14 11:26:03 old-k8s-version-030235 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 11:26:03 old-k8s-version-030235 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 11:26:03 old-k8s-version-030235 kubelet[24838]: I0114 11:26:03.727963   24838 server.go:410] Version: v1.16.0
	Jan 14 11:26:03 old-k8s-version-030235 kubelet[24838]: I0114 11:26:03.728142   24838 plugins.go:100] No cloud provider specified.
	Jan 14 11:26:03 old-k8s-version-030235 kubelet[24838]: I0114 11:26:03.728152   24838 server.go:773] Client rotation is on, will bootstrap in background
	Jan 14 11:26:03 old-k8s-version-030235 kubelet[24838]: I0114 11:26:03.729910   24838 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 14 11:26:03 old-k8s-version-030235 kubelet[24838]: W0114 11:26:03.730601   24838 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 14 11:26:03 old-k8s-version-030235 kubelet[24838]: W0114 11:26:03.730719   24838 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 14 11:26:03 old-k8s-version-030235 kubelet[24838]: F0114 11:26:03.730749   24838 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 14 11:26:03 old-k8s-version-030235 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 14 11:26:03 old-k8s-version-030235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 11:26:04 old-k8s-version-030235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Jan 14 11:26:04 old-k8s-version-030235 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 11:26:04 old-k8s-version-030235 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 11:26:04 old-k8s-version-030235 kubelet[24850]: I0114 11:26:04.498784   24850 server.go:410] Version: v1.16.0
	Jan 14 11:26:04 old-k8s-version-030235 kubelet[24850]: I0114 11:26:04.499095   24850 plugins.go:100] No cloud provider specified.
	Jan 14 11:26:04 old-k8s-version-030235 kubelet[24850]: I0114 11:26:04.499133   24850 server.go:773] Client rotation is on, will bootstrap in background
	Jan 14 11:26:04 old-k8s-version-030235 kubelet[24850]: I0114 11:26:04.500983   24850 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 14 11:26:04 old-k8s-version-030235 kubelet[24850]: W0114 11:26:04.501741   24850 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 14 11:26:04 old-k8s-version-030235 kubelet[24850]: W0114 11:26:04.501815   24850 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 14 11:26:04 old-k8s-version-030235 kubelet[24850]: F0114 11:26:04.501840   24850 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 14 11:26:04 old-k8s-version-030235 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 14 11:26:04 old-k8s-version-030235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 03:26:04.651875   20346 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 2 (409.892763ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-030235" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:27:07.589721    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:27:11.869324    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:27:46.627174    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:27:58.428104    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:28:15.063428    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:28:59.176284    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:29:00.537567    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:29:19.860768    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:29:27.571846    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:29:38.159537    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:38.165100    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:38.176597    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:38.197202    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:38.237297    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:38.319497    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:38.481397    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:38.803033    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:39.444571    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:40.725805    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
E0114 03:29:43.286487    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:29:48.406943    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:29:58.649385    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:30:19.131881    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:30:31.240082    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:30:41.637000    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:30:54.876578    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:31:00.093666    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:31:54.292552    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:32:07.593655    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:32:11.873505    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:32:22.016916    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:54078/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0114 03:32:30.616468    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:32:46.630685    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:32:58.431315    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:33:15.067611    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:33:44.683697    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:33:59.179555    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:34:00.543146    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:34:19.863328    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:34:27.573824    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:34:38.163168    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0114 03:35:05.868057    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/default-k8s-diff-port-031843/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 2 (436.384361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-030235" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-030235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-030235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.109µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-030235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-030235
helpers_test.go:235: (dbg) docker inspect old-k8s-version-030235:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942",
	        "Created": "2023-01-14T11:02:44.471910321Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-14T11:08:21.837144259Z",
	            "FinishedAt": "2023-01-14T11:08:18.899183667Z"
	        },
	        "Image": "sha256:fc27a248bc74ebfeee6e23949796c0207c7892e924b780fcd7204ed3e09ea2a3",
	        "ResolvConfPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hostname",
	        "HostsPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/hosts",
	        "LogPath": "/var/lib/docker/containers/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942/d977528adcbe29b4d7ab3f9e8738960d61890b45ab28eb5441685433d0ecd942-json.log",
	        "Name": "/old-k8s-version-030235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-030235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-030235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408-init/diff:/var/lib/docker/overlay2/74c9e0d36b5b0c73e7df7f4bce3bd0c3d02cf9dc383bffd6fbcff44769e0e62a/diff:/var/lib/docker/overlay2/ba601a6c163e2d067928a6364b090a9785c3dd2470d90823ce10e62a47aa569f/diff:/var/lib/docker/overlay2/80b54fffffd853e7ba8f14b1c1ac90a8b75fb31aafab2d53fe628cb592a95844/diff:/var/lib/docker/overlay2/02213d03e53450db4a2d492831eba720749d97435157430d240b760477b64c78/diff:/var/lib/docker/overlay2/e3727b5662aa5fdeeef9053112ad90fb2f9aaecbfeeddefa3efb066881ae1677/diff:/var/lib/docker/overlay2/685adc0695be0cb9862d43898ceae6e6a36c3cc98f04bc25e314797bed3b1d95/diff:/var/lib/docker/overlay2/7e133e132419c5ad6565f89b3ecfdf2c9fa038e5b9c39fe81c1269cfb6bb0d22/diff:/var/lib/docker/overlay2/c4d27ebf7e050a3aee0acccdadb92fc9390befadef2b0b13b9ebe87a2af3ef50/diff:/var/lib/docker/overlay2/0f07a86eba9c199451031724816d33cb5d2e19c401514edd8c1e392fd795f1e1/diff:/var/lib/docker/overlay2/a51cfe
8ee6145a30d356888e940bfdda67bc55c29f3972b35ae93dd989943b1c/diff:/var/lib/docker/overlay2/b155ac1a426201afe2af9fba8a7ebbecd3d8271f8613d0f53dac7bb190bc977f/diff:/var/lib/docker/overlay2/7c5cec64dde89a12b95bb1a0bca411b06b69201cfdb3cc4b46cb87a5bcff9a7f/diff:/var/lib/docker/overlay2/dd54bb055fc70a41daa3f3e950f4bdadd925db2c588d7d831edb4cbb176d30c7/diff:/var/lib/docker/overlay2/f58b39c756189e32d5b9c66b5c3861eabf5ab01ebc6179fec7210d414762bf45/diff:/var/lib/docker/overlay2/6458e00e4b79399a4860e78a572cd21fd47cbca2a54d189f34bd4a438145a6f5/diff:/var/lib/docker/overlay2/66427e9f49ff5383f9f819513857efb87ee3f880df33a86ac46ebc140ff172ed/diff:/var/lib/docker/overlay2/33f03d40d23c6a829c43633ba96c4058fbf09a4cf912eb51e0ca23a65574b0a7/diff:/var/lib/docker/overlay2/e68584e2b5a5a18fbd6edeeba6d80fe43e2199775b520878ca842d463078a2d1/diff:/var/lib/docker/overlay2/a2bfe134a89cb821f2c8e5ec6b42888d30fac6a9ed1aa4853476bb33cfe2e157/diff:/var/lib/docker/overlay2/f55951d7e041b300f9842916d51648285b79860a132d032d3c23b80af7c280fa/diff:/var/lib/d
ocker/overlay2/76cb0b8d6987165c472c0c9d54491045539294d203577a4ed7fac7f7cbbf0322/diff:/var/lib/docker/overlay2/a8f6d057d4938258302dd54e9a2e99732b4a2ac5c869366e93983e3e8890d432/diff:/var/lib/docker/overlay2/16bf4a461f9fe0edba90225f752527e534469b1bfbeb5bca6315512786340bfe/diff:/var/lib/docker/overlay2/2d022a51ddd598853537ff8fbeca5b94beff9d5d7e6ca81ffe011aa35121268a/diff:/var/lib/docker/overlay2/e30d56ebfba93be441f305b1938dd2d0f847f649922524ebef1fbe3e4b3b4bf9/diff:/var/lib/docker/overlay2/12df07bd2576a7b97f383aa3fcb2535f75a901953859063d9b65944d2dd0b152/diff:/var/lib/docker/overlay2/79e70748fe1267851a900b8bca2ab4e0b34e8163714fc440602d9e0273c93421/diff:/var/lib/docker/overlay2/c4fa6441d4ff7ce1be2072a8f61c5c495ff1785d9fee891191262b893a6eff63/diff:/var/lib/docker/overlay2/748980353d2fab0e6498a85b0c558d9eb7f34703302b21298c310b98dcf4d6f9/diff:/var/lib/docker/overlay2/48f823bc2f4741841d95ac4706f52fe9d01883bce998d5c999bdc363c838b1ee/diff:/var/lib/docker/overlay2/5f4f42c0e92359fc7ea2cf540120bd09407fd1d8dee5b56896919b39d3e
70033/diff:/var/lib/docker/overlay2/4a4066d1d0f42bb48af787d9f9bd115bacffde91f4ca8c20648dad3b25f904b6/diff:/var/lib/docker/overlay2/5f1054f553934c922e4dffc5c3804a5825ed249f7df9c3da31e2081145c8749a/diff:/var/lib/docker/overlay2/a6fe8ece465ba51837f6a88e28c3b571b632f0b223900278ac4a5f5dc0577520/diff:/var/lib/docker/overlay2/ee3e9af6d65fe9d2da423711b90ee171fd35422619c22b802d5fead4f861d921/diff:/var/lib/docker/overlay2/b353b985af8b2f665218f5af5e89cb642745824e2c3b51bfe3aa58c801823c46/diff:/var/lib/docker/overlay2/4411168ee372991c59d386d2ec200449c718a5343f5efa545ad9552a5c349310/diff:/var/lib/docker/overlay2/eeb668637d75a5802fe62d8a71458c68195302676ff09eb1e973d633e24e8588/diff:/var/lib/docker/overlay2/67b1dd580c0c0e994c4fe1233fef817d2c085438c80485c1f2eec64392c7b709/diff:/var/lib/docker/overlay2/1ae992d82b2e0a4c2a667c7d0d9e243efda7ee206e17c862bf093fa976667cc3/diff:/var/lib/docker/overlay2/ab6d393733a7abd2a9bd5612a0cef5adc3cded30c596c212828a8475c9c29779/diff:/var/lib/docker/overlay2/c927272ea82dc6bb318adcf8eb94099eece7af
9df7f454ff921048ba7ce589d2/diff:/var/lib/docker/overlay2/722309d1402eda210190af6c69b6f9998aff66e78e5bbc972ae865d10f0474d7/diff:/var/lib/docker/overlay2/c8a4e498ea2b5c051ced01db75d10e4ed1619bd3acc28c000789b600f8a7e23b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3eccb64083cca531e170ed100ae49bcda69dc3a0b152caee88100d92223408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-030235",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-030235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-030235",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-030235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96e0c70534d1f6a1812c15eb3499843abdb380deba02ee4637e8918b0f3daae3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54079"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54080"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54076"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "54078"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/96e0c70534d1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-030235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d977528adcbe",
	                        "old-k8s-version-030235"
	                    ],
	                    "NetworkID": "ab958c8662819925836c350f1443c8060424291379d9dc2b6c89656fa5f7da2a",
	                    "EndpointID": "99ff9fa9f16b08cacf98f575b4464b9b756d4f1cf10c888cca45473adbdc8e4e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 2 (392.346091ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-030235 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-030235 logs -n 25: (3.454802027s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	| delete  | -p embed-certs-031128                                      | embed-certs-031128           | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	| delete  | -p                                                         | disable-driver-mounts-031842 | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:18 PST |
	|         | disable-driver-mounts-031842                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:18 PST | 14 Jan 23 03:19 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:19 PST | 14 Jan 23 03:19 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:19 PST | 14 Jan 23 03:20 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-031843           | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:20 PST | 14 Jan 23 03:20 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:20 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-031843 | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:25 PST |
	|         | default-k8s-diff-port-031843                               |                              |         |         |                     |                     |
	| start   | -p newest-cni-032535 --memory=2200 --alsologtostderr       | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:25 PST | 14 Jan 23 03:26 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-032535                 | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:26 PST | 14 Jan 23 03:26 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-032535                                       | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:26 PST | 14 Jan 23 03:26 PST |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-032535                      | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:26 PST | 14 Jan 23 03:26 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-032535 --memory=2200 --alsologtostderr       | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:26 PST | 14 Jan 23 03:26 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-032535 sudo                                  | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:26 PST | 14 Jan 23 03:26 PST |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-032535                                       | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:26 PST | 14 Jan 23 03:26 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-032535                                       | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:26 PST | 14 Jan 23 03:26 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-032535                                       | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:26 PST | 14 Jan 23 03:26 PST |
	| delete  | -p newest-cni-032535                                       | newest-cni-032535            | jenkins | v1.28.0 | 14 Jan 23 03:26 PST | 14 Jan 23 03:26 PST |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 03:26:32
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 03:26:32.686373   20464 out.go:296] Setting OutFile to fd 1 ...
	I0114 03:26:32.686534   20464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:26:32.686541   20464 out.go:309] Setting ErrFile to fd 2...
	I0114 03:26:32.686545   20464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 03:26:32.686661   20464 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 03:26:32.687143   20464 out.go:303] Setting JSON to false
	I0114 03:26:32.705662   20464 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5166,"bootTime":1673690426,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 03:26:32.705753   20464 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 03:26:32.727781   20464 out.go:177] * [newest-cni-032535] minikube v1.28.0 on Darwin 13.0.1
	I0114 03:26:32.770717   20464 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 03:26:32.770732   20464 notify.go:220] Checking for updates...
	I0114 03:26:32.815471   20464 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 03:26:32.837496   20464 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 03:26:32.859243   20464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 03:26:32.880631   20464 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 03:26:32.903158   20464 config.go:180] Loaded profile config "newest-cni-032535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:26:32.903851   20464 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 03:26:32.965046   20464 docker.go:138] docker version: linux-20.10.21
	I0114 03:26:32.965192   20464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:26:33.107060   20464 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:26:33.015430511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:26:33.149713   20464 out.go:177] * Using the docker driver based on existing profile
	I0114 03:26:33.170690   20464 start.go:294] selected driver: docker
	I0114 03:26:33.170716   20464 start.go:838] validating driver "docker" against &{Name:newest-cni-032535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-032535 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:26:33.170851   20464 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 03:26:33.174875   20464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 03:26:33.315112   20464 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 11:26:33.224778717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 03:26:33.315271   20464 start_flags.go:936] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0114 03:26:33.315290   20464 cni.go:95] Creating CNI manager for ""
	I0114 03:26:33.315300   20464 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:26:33.315312   20464 start_flags.go:319] config:
	{Name:newest-cni-032535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-032535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:26:33.358835   20464 out.go:177] * Starting control plane node newest-cni-032535 in cluster newest-cni-032535
	I0114 03:26:33.380781   20464 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 03:26:33.402572   20464 out.go:177] * Pulling base image ...
	I0114 03:26:33.445850   20464 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:26:33.445866   20464 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 03:26:33.445957   20464 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 03:26:33.445973   20464 cache.go:57] Caching tarball of preloaded images
	I0114 03:26:33.446185   20464 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 03:26:33.446208   20464 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 03:26:33.447226   20464 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/config.json ...
	I0114 03:26:33.502280   20464 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon, skipping pull
	I0114 03:26:33.502297   20464 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in daemon, skipping load
	I0114 03:26:33.502317   20464 cache.go:193] Successfully downloaded all kic artifacts
	I0114 03:26:33.502354   20464 start.go:364] acquiring machines lock for newest-cni-032535: {Name:mkd4f8cc2b2c691682dbced0ece05c9d995f481f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 03:26:33.502446   20464 start.go:368] acquired machines lock for "newest-cni-032535" in 67.444µs
	I0114 03:26:33.502471   20464 start.go:96] Skipping create...Using existing machine configuration
	I0114 03:26:33.502482   20464 fix.go:55] fixHost starting: 
	I0114 03:26:33.502757   20464 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:26:33.560650   20464 fix.go:103] recreateIfNeeded on newest-cni-032535: state=Stopped err=<nil>
	W0114 03:26:33.560679   20464 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 03:26:33.582583   20464 out.go:177] * Restarting existing docker container for "newest-cni-032535" ...
	I0114 03:26:33.604732   20464 cli_runner.go:164] Run: docker start newest-cni-032535
	I0114 03:26:33.939900   20464 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:26:34.000279   20464 kic.go:426] container "newest-cni-032535" state is running.
	I0114 03:26:34.000957   20464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-032535
	I0114 03:26:34.066804   20464 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/config.json ...
	I0114 03:26:34.067423   20464 machine.go:88] provisioning docker machine ...
	I0114 03:26:34.067518   20464 ubuntu.go:169] provisioning hostname "newest-cni-032535"
	I0114 03:26:34.067663   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:34.133990   20464 main.go:134] libmachine: Using SSH client type: native
	I0114 03:26:34.134188   20464 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55187 <nil> <nil>}
	I0114 03:26:34.134205   20464 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-032535 && echo "newest-cni-032535" | sudo tee /etc/hostname
	I0114 03:26:34.267160   20464 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-032535
	
	I0114 03:26:34.267266   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:34.328119   20464 main.go:134] libmachine: Using SSH client type: native
	I0114 03:26:34.328275   20464 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55187 <nil> <nil>}
	I0114 03:26:34.328288   20464 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-032535' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-032535/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-032535' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 03:26:34.446006   20464 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:26:34.446042   20464 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1559/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1559/.minikube}
	I0114 03:26:34.446075   20464 ubuntu.go:177] setting up certificates
	I0114 03:26:34.446090   20464 provision.go:83] configureAuth start
	I0114 03:26:34.446202   20464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-032535
	I0114 03:26:34.514663   20464 provision.go:138] copyHostCerts
	I0114 03:26:34.514777   20464 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem, removing ...
	I0114 03:26:34.514788   20464 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem
	I0114 03:26:34.514898   20464 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.pem (1082 bytes)
	I0114 03:26:34.515186   20464 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem, removing ...
	I0114 03:26:34.515195   20464 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem
	I0114 03:26:34.515259   20464 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/cert.pem (1123 bytes)
	I0114 03:26:34.515455   20464 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem, removing ...
	I0114 03:26:34.515475   20464 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem
	I0114 03:26:34.515560   20464 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1559/.minikube/key.pem (1679 bytes)
	I0114 03:26:34.515757   20464 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem org=jenkins.newest-cni-032535 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-032535]
	I0114 03:26:34.650434   20464 provision.go:172] copyRemoteCerts
	I0114 03:26:34.650514   20464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 03:26:34.650590   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:34.711885   20464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55187 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:26:34.796232   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0114 03:26:34.816063   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0114 03:26:34.833050   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 03:26:34.849978   20464 provision.go:86] duration metric: configureAuth took 403.867068ms
	I0114 03:26:34.849994   20464 ubuntu.go:193] setting minikube options for container-runtime
	I0114 03:26:34.850171   20464 config.go:180] Loaded profile config "newest-cni-032535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:26:34.850254   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:34.908278   20464 main.go:134] libmachine: Using SSH client type: native
	I0114 03:26:34.908423   20464 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55187 <nil> <nil>}
	I0114 03:26:34.908432   20464 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 03:26:35.026729   20464 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0114 03:26:35.026745   20464 ubuntu.go:71] root file system type: overlay
	I0114 03:26:35.026885   20464 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 03:26:35.026982   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:35.085271   20464 main.go:134] libmachine: Using SSH client type: native
	I0114 03:26:35.085434   20464 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55187 <nil> <nil>}
	I0114 03:26:35.085486   20464 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 03:26:35.210019   20464 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 03:26:35.210135   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:35.268565   20464 main.go:134] libmachine: Using SSH client type: native
	I0114 03:26:35.268719   20464 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil>  [] 0s} 127.0.0.1 55187 <nil> <nil>}
	I0114 03:26:35.268732   20464 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 03:26:35.389955   20464 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 03:26:35.389970   20464 machine.go:91] provisioned docker machine in 1.322508305s
	I0114 03:26:35.389980   20464 start.go:300] post-start starting for "newest-cni-032535" (driver="docker")
	I0114 03:26:35.389985   20464 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 03:26:35.390083   20464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 03:26:35.390162   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:35.447630   20464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55187 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:26:35.531736   20464 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 03:26:35.535413   20464 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0114 03:26:35.535431   20464 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0114 03:26:35.535443   20464 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0114 03:26:35.535450   20464 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0114 03:26:35.535457   20464 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/addons for local assets ...
	I0114 03:26:35.535548   20464 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1559/.minikube/files for local assets ...
	I0114 03:26:35.535713   20464 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem -> 27282.pem in /etc/ssl/certs
	I0114 03:26:35.535907   20464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 03:26:35.543344   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:26:35.560273   20464 start.go:303] post-start completed in 170.282215ms
	I0114 03:26:35.560355   20464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 03:26:35.560423   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:35.619738   20464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55187 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:26:35.705022   20464 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0114 03:26:35.709701   20464 fix.go:57] fixHost completed within 2.207195621s
	I0114 03:26:35.709714   20464 start.go:83] releasing machines lock for "newest-cni-032535", held for 2.207234313s
	I0114 03:26:35.709815   20464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-032535
	I0114 03:26:35.767786   20464 ssh_runner.go:195] Run: cat /version.json
	I0114 03:26:35.767802   20464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 03:26:35.767870   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:35.767875   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:35.830225   20464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55187 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:26:35.830500   20464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55187 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:26:35.968192   20464 ssh_runner.go:195] Run: systemctl --version
	I0114 03:26:35.973402   20464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0114 03:26:35.980932   20464 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0114 03:26:35.993745   20464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:26:36.060785   20464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0114 03:26:36.149750   20464 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 03:26:36.160201   20464 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0114 03:26:36.160277   20464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 03:26:36.169814   20464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 03:26:36.182610   20464 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 03:26:36.246241   20464 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 03:26:36.315213   20464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:26:36.388213   20464 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 03:26:36.629474   20464 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 03:26:36.699325   20464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 03:26:36.801787   20464 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 03:26:36.819135   20464 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 03:26:36.819237   20464 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 03:26:36.823899   20464 start.go:472] Will wait 60s for crictl version
	I0114 03:26:36.823948   20464 ssh_runner.go:195] Run: which crictl
	I0114 03:26:36.827855   20464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 03:26:36.872729   20464 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 03:26:36.872826   20464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:26:36.903441   20464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 03:26:36.979260   20464 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 03:26:36.979434   20464 cli_runner.go:164] Run: docker exec -t newest-cni-032535 dig +short host.docker.internal
	I0114 03:26:37.095012   20464 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0114 03:26:37.095141   20464 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0114 03:26:37.099721   20464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 03:26:37.110060   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:37.192982   20464 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0114 03:26:37.214761   20464 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 03:26:37.214914   20464 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:26:37.240992   20464 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 03:26:37.241014   20464 docker.go:543] Images already preloaded, skipping extraction
	I0114 03:26:37.241106   20464 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 03:26:37.265910   20464 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 03:26:37.265931   20464 cache_images.go:84] Images are preloaded, skipping loading
	I0114 03:26:37.266027   20464 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 03:26:37.337830   20464 cni.go:95] Creating CNI manager for ""
	I0114 03:26:37.337847   20464 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:26:37.337899   20464 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0114 03:26:37.337918   20464 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-032535 NodeName:newest-cni-032535 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArg
s:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 03:26:37.338039   20464 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-032535"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 03:26:37.338130   20464 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-032535 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-032535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 03:26:37.338209   20464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 03:26:37.346422   20464 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 03:26:37.346491   20464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 03:26:37.354075   20464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I0114 03:26:37.367595   20464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 03:26:37.380481   20464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I0114 03:26:37.393844   20464 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0114 03:26:37.397916   20464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 03:26:37.408745   20464 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535 for IP: 192.168.67.2
	I0114 03:26:37.408855   20464 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key
	I0114 03:26:37.408915   20464 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key
	I0114 03:26:37.409007   20464 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/client.key
	I0114 03:26:37.409075   20464 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.key.c7fa3a9e
	I0114 03:26:37.409133   20464 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.key
	I0114 03:26:37.409376   20464 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem (1338 bytes)
	W0114 03:26:37.409422   20464 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728_empty.pem, impossibly tiny 0 bytes
	I0114 03:26:37.409439   20464 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 03:26:37.409477   20464 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/ca.pem (1082 bytes)
	I0114 03:26:37.409515   20464 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/cert.pem (1123 bytes)
	I0114 03:26:37.409555   20464 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/certs/key.pem (1679 bytes)
	I0114 03:26:37.409634   20464 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem (1708 bytes)
	I0114 03:26:37.410203   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 03:26:37.429361   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 03:26:37.449942   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 03:26:37.471631   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/newest-cni-032535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0114 03:26:37.489932   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 03:26:37.511524   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0114 03:26:37.529788   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 03:26:37.548873   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0114 03:26:37.567552   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/certs/2728.pem --> /usr/share/ca-certificates/2728.pem (1338 bytes)
	I0114 03:26:37.585430   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/ssl/certs/27282.pem --> /usr/share/ca-certificates/27282.pem (1708 bytes)
	I0114 03:26:37.605549   20464 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1559/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 03:26:37.624958   20464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 03:26:37.638726   20464 ssh_runner.go:195] Run: openssl version
	I0114 03:26:37.644694   20464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27282.pem && ln -fs /usr/share/ca-certificates/27282.pem /etc/ssl/certs/27282.pem"
	I0114 03:26:37.653531   20464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27282.pem
	I0114 03:26:37.657801   20464 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:11 /usr/share/ca-certificates/27282.pem
	I0114 03:26:37.657859   20464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27282.pem
	I0114 03:26:37.663582   20464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27282.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 03:26:37.671689   20464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 03:26:37.679727   20464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:26:37.683874   20464 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:26:37.683940   20464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 03:26:37.689945   20464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 03:26:37.719934   20464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2728.pem && ln -fs /usr/share/ca-certificates/2728.pem /etc/ssl/certs/2728.pem"
	I0114 03:26:37.728037   20464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2728.pem
	I0114 03:26:37.732022   20464 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:11 /usr/share/ca-certificates/2728.pem
	I0114 03:26:37.732076   20464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2728.pem
	I0114 03:26:37.737545   20464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2728.pem /etc/ssl/certs/51391683.0"
	I0114 03:26:37.745056   20464 kubeadm.go:396] StartCluster: {Name:newest-cni-032535 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-032535 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNo
deRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 03:26:37.745183   20464 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:26:37.768860   20464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 03:26:37.776896   20464 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 03:26:37.776912   20464 kubeadm.go:627] restartCluster start
	I0114 03:26:37.776968   20464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 03:26:37.783908   20464 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:37.783990   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:37.843303   20464 kubeconfig.go:135] verify returned: extract IP: "newest-cni-032535" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 03:26:37.843458   20464 kubeconfig.go:146] "newest-cni-032535" context is missing from /Users/jenkins/minikube-integration/15642-1559/kubeconfig - will repair!
	I0114 03:26:37.843785   20464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/kubeconfig: {Name:mkb6d1db5780815291441dc67b348461b9325651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:26:37.844983   20464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 03:26:37.852749   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:37.852810   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:37.861387   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:38.062799   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:38.062921   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:38.074026   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:38.262762   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:38.262943   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:38.273216   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:38.462658   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:38.462834   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:38.473908   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:38.663531   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:38.663719   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:38.674882   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:38.862005   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:38.862206   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:38.873268   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:39.061799   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:39.061937   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:39.073438   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:39.262550   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:39.262691   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:39.273988   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:39.462908   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:39.463062   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:39.474019   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:39.661723   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:39.661799   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:39.671304   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:39.861557   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:39.861735   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:39.872747   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:40.063569   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:40.063720   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:40.074807   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:40.262575   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:40.262709   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:40.273812   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:40.461491   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:40.461594   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:40.471545   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:40.663561   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:40.663770   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:40.674675   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:40.861628   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:40.861831   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:40.872661   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:40.872672   20464 api_server.go:165] Checking apiserver status ...
	I0114 03:26:40.872728   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0114 03:26:40.880969   20464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:40.880981   20464 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0114 03:26:40.880990   20464 kubeadm.go:1114] stopping kube-system containers ...
	I0114 03:26:40.881070   20464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0114 03:26:40.906251   20464 docker.go:444] Stopping containers: [ed151604cb59 8dc4bf203752 2634c4a976d5 e9dff9d38796 5d7354460ce2 e4386b659744 e818bdcfe0a5 cdfb083d64f8 6bb7d5ab1667 aebc07e4bacf de5410bfc12e ac0034216189 c160f387c36b 9e7f00fb8f94 29f9fd35dec3 a57731c8d5dd]
	I0114 03:26:40.906347   20464 ssh_runner.go:195] Run: docker stop ed151604cb59 8dc4bf203752 2634c4a976d5 e9dff9d38796 5d7354460ce2 e4386b659744 e818bdcfe0a5 cdfb083d64f8 6bb7d5ab1667 aebc07e4bacf de5410bfc12e ac0034216189 c160f387c36b 9e7f00fb8f94 29f9fd35dec3 a57731c8d5dd
	I0114 03:26:40.929624   20464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 03:26:40.940079   20464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 03:26:40.948186   20464 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 14 11:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan 14 11:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan 14 11:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 14 11:25 /etc/kubernetes/scheduler.conf
	
	I0114 03:26:40.948258   20464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 03:26:40.955815   20464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 03:26:40.963449   20464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 03:26:40.970626   20464 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:40.970688   20464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 03:26:40.977852   20464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 03:26:40.985525   20464 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 03:26:40.985582   20464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0114 03:26:40.992675   20464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 03:26:41.000236   20464 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 03:26:41.000248   20464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:26:41.050003   20464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:26:41.623954   20464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:26:41.759784   20464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:26:41.809625   20464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:26:41.944483   20464 api_server.go:51] waiting for apiserver process to appear ...
	I0114 03:26:41.944576   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:26:42.457464   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:26:42.957385   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:26:43.457748   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:26:43.470302   20464 api_server.go:71] duration metric: took 1.525800748s to wait for apiserver process to appear ...
	I0114 03:26:43.470324   20464 api_server.go:87] waiting for apiserver healthz status ...
	I0114 03:26:43.470341   20464 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55191/healthz ...
	I0114 03:26:46.457473   20464 api_server.go:278] https://127.0.0.1:55191/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 03:26:46.457495   20464 api_server.go:102] status: https://127.0.0.1:55191/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 03:26:46.958082   20464 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55191/healthz ...
	I0114 03:26:46.965171   20464 api_server.go:278] https://127.0.0.1:55191/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 03:26:46.965188   20464 api_server.go:102] status: https://127.0.0.1:55191/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 03:26:47.457757   20464 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55191/healthz ...
	I0114 03:26:47.463294   20464 api_server.go:278] https://127.0.0.1:55191/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 03:26:47.463315   20464 api_server.go:102] status: https://127.0.0.1:55191/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 03:26:47.957620   20464 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55191/healthz ...
	I0114 03:26:47.964047   20464 api_server.go:278] https://127.0.0.1:55191/healthz returned 200:
	ok
	I0114 03:26:47.977082   20464 api_server.go:140] control plane version: v1.25.3
	I0114 03:26:47.977098   20464 api_server.go:130] duration metric: took 4.506715295s to wait for apiserver health ...
	I0114 03:26:47.977104   20464 cni.go:95] Creating CNI manager for ""
	I0114 03:26:47.977110   20464 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 03:26:47.977122   20464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 03:26:47.988293   20464 system_pods.go:59] 8 kube-system pods found
	I0114 03:26:47.988335   20464 system_pods.go:61] "coredns-565d847f94-wwhpm" [d9cd76c9-2333-4dd8-977c-632d051bb7b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0114 03:26:47.988343   20464 system_pods.go:61] "etcd-newest-cni-032535" [47ef02ab-bad3-41a1-b381-f0c86b6158ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 03:26:47.988348   20464 system_pods.go:61] "kube-apiserver-newest-cni-032535" [f2963c19-897a-4a7f-8613-966cb8030da9] Running
	I0114 03:26:47.988353   20464 system_pods.go:61] "kube-controller-manager-newest-cni-032535" [644b7f9f-9d48-467a-a06b-a82ea54d86ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 03:26:47.988361   20464 system_pods.go:61] "kube-proxy-r4244" [7f5a5549-a1d3-464b-8319-08d5cb751621] Running
	I0114 03:26:47.988367   20464 system_pods.go:61] "kube-scheduler-newest-cni-032535" [31d847ee-89c8-410d-ae31-ae7832da63ce] Running
	I0114 03:26:47.988371   20464 system_pods.go:61] "metrics-server-5c8fd5cf8-7h8zc" [edcfa060-e5f0-4ab7-8af4-dfedc7f392c3] Pending
	I0114 03:26:47.988376   20464 system_pods.go:61] "storage-provisioner" [14fc0378-c593-4659-ba9e-a57fb47e07f3] Running
	I0114 03:26:47.988382   20464 system_pods.go:74] duration metric: took 11.254414ms to wait for pod list to return data ...
	I0114 03:26:47.988389   20464 node_conditions.go:102] verifying NodePressure condition ...
	I0114 03:26:47.994044   20464 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 03:26:47.994061   20464 node_conditions.go:123] node cpu capacity is 6
	I0114 03:26:47.994071   20464 node_conditions.go:105] duration metric: took 5.678566ms to run NodePressure ...
	I0114 03:26:47.994084   20464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 03:26:48.447499   20464 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 03:26:48.457656   20464 ops.go:34] apiserver oom_adj: -16
	I0114 03:26:48.457670   20464 kubeadm.go:631] restartCluster took 10.680627703s
	I0114 03:26:48.457686   20464 kubeadm.go:398] StartCluster complete in 10.712509884s
	I0114 03:26:48.457705   20464 settings.go:142] acquiring lock: {Name:mka95467446367990e489ec54b84107091d6186f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:26:48.457816   20464 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 03:26:48.458465   20464 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1559/kubeconfig: {Name:mkb6d1db5780815291441dc67b348461b9325651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 03:26:48.462322   20464 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-032535" rescaled to 1
	I0114 03:26:48.462365   20464 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 03:26:48.462395   20464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 03:26:48.486749   20464 out.go:177] * Verifying Kubernetes components...
	I0114 03:26:48.462431   20464 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0114 03:26:48.462579   20464 config.go:180] Loaded profile config "newest-cni-032535": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 03:26:48.544370   20464 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-032535"
	I0114 03:26:48.544396   20464 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-032535"
	I0114 03:26:48.544404   20464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 03:26:48.544412   20464 addons.go:65] Setting metrics-server=true in profile "newest-cni-032535"
	I0114 03:26:48.544414   20464 addons.go:65] Setting dashboard=true in profile "newest-cni-032535"
	W0114 03:26:48.544404   20464 addons.go:236] addon storage-provisioner should already be in state true
	I0114 03:26:48.544432   20464 addons.go:227] Setting addon dashboard=true in "newest-cni-032535"
	I0114 03:26:48.544438   20464 addons.go:227] Setting addon metrics-server=true in "newest-cni-032535"
	W0114 03:26:48.544445   20464 addons.go:236] addon dashboard should already be in state true
	W0114 03:26:48.544451   20464 addons.go:236] addon metrics-server should already be in state true
	I0114 03:26:48.544404   20464 addons.go:65] Setting default-storageclass=true in profile "newest-cni-032535"
	I0114 03:26:48.544493   20464 host.go:66] Checking if "newest-cni-032535" exists ...
	I0114 03:26:48.544495   20464 host.go:66] Checking if "newest-cni-032535" exists ...
	I0114 03:26:48.544527   20464 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-032535"
	I0114 03:26:48.544496   20464 host.go:66] Checking if "newest-cni-032535" exists ...
	I0114 03:26:48.545077   20464 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:26:48.545106   20464 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:26:48.545113   20464 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:26:48.545232   20464 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:26:48.651165   20464 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0114 03:26:48.646271   20464 addons.go:227] Setting addon default-storageclass=true in "newest-cni-032535"
	I0114 03:26:48.688526   20464 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0114 03:26:48.725446   20464 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 03:26:48.762658   20464 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W0114 03:26:48.762671   20464 addons.go:236] addon default-storageclass should already be in state true
	I0114 03:26:48.762674   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0114 03:26:48.799641   20464 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 03:26:48.805765   20464 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0114 03:26:48.805802   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:48.836727   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 03:26:48.836741   20464 host.go:66] Checking if "newest-cni-032535" exists ...
	I0114 03:26:48.836884   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:48.858611   20464 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0114 03:26:48.836968   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:48.838754   20464 cli_runner.go:164] Run: docker container inspect newest-cni-032535 --format={{.State.Status}}
	I0114 03:26:48.879507   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0114 03:26:48.879531   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0114 03:26:48.880248   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:48.920422   20464 api_server.go:51] waiting for apiserver process to appear ...
	I0114 03:26:48.920548   20464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 03:26:48.950451   20464 api_server.go:71] duration metric: took 488.051085ms to wait for apiserver process to appear ...
	I0114 03:26:48.950496   20464 api_server.go:87] waiting for apiserver healthz status ...
	I0114 03:26:48.950514   20464 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55191/healthz ...
	I0114 03:26:48.961017   20464 api_server.go:278] https://127.0.0.1:55191/healthz returned 200:
	ok
	I0114 03:26:48.961830   20464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55187 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:26:48.963815   20464 api_server.go:140] control plane version: v1.25.3
	I0114 03:26:48.963830   20464 api_server.go:130] duration metric: took 13.324228ms to wait for apiserver health ...
	I0114 03:26:48.963837   20464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 03:26:48.972167   20464 system_pods.go:59] 8 kube-system pods found
	I0114 03:26:48.972186   20464 system_pods.go:61] "coredns-565d847f94-wwhpm" [d9cd76c9-2333-4dd8-977c-632d051bb7b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0114 03:26:48.972197   20464 system_pods.go:61] "etcd-newest-cni-032535" [47ef02ab-bad3-41a1-b381-f0c86b6158ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0114 03:26:48.972211   20464 system_pods.go:61] "kube-apiserver-newest-cni-032535" [f2963c19-897a-4a7f-8613-966cb8030da9] Running
	I0114 03:26:48.972220   20464 system_pods.go:61] "kube-controller-manager-newest-cni-032535" [644b7f9f-9d48-467a-a06b-a82ea54d86ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0114 03:26:48.972226   20464 system_pods.go:61] "kube-proxy-r4244" [7f5a5549-a1d3-464b-8319-08d5cb751621] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0114 03:26:48.972235   20464 system_pods.go:61] "kube-scheduler-newest-cni-032535" [31d847ee-89c8-410d-ae31-ae7832da63ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0114 03:26:48.972242   20464 system_pods.go:61] "metrics-server-5c8fd5cf8-7h8zc" [edcfa060-e5f0-4ab7-8af4-dfedc7f392c3] Pending
	I0114 03:26:48.972249   20464 system_pods.go:61] "storage-provisioner" [14fc0378-c593-4659-ba9e-a57fb47e07f3] Running
	I0114 03:26:48.972256   20464 system_pods.go:74] duration metric: took 8.414058ms to wait for pod list to return data ...
	I0114 03:26:48.972265   20464 default_sa.go:34] waiting for default service account to be created ...
	I0114 03:26:48.976018   20464 default_sa.go:45] found service account: "default"
	I0114 03:26:48.976035   20464 default_sa.go:55] duration metric: took 3.763408ms for default service account to be created ...
	I0114 03:26:48.976056   20464 kubeadm.go:573] duration metric: took 513.663452ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0114 03:26:48.976075   20464 node_conditions.go:102] verifying NodePressure condition ...
	I0114 03:26:48.977952   20464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55187 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:26:48.981483   20464 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0114 03:26:48.981500   20464 node_conditions.go:123] node cpu capacity is 6
	I0114 03:26:48.981511   20464 node_conditions.go:105] duration metric: took 5.43044ms to run NodePressure ...
	I0114 03:26:48.981522   20464 start.go:217] waiting for startup goroutines ...
	I0114 03:26:48.991183   20464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55187 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:26:48.998042   20464 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 03:26:48.998064   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 03:26:48.998212   20464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-032535
	I0114 03:26:49.069032   20464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55187 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/newest-cni-032535/id_rsa Username:docker}
	I0114 03:26:49.154033   20464 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0114 03:26:49.154050   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0114 03:26:49.155714   20464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 03:26:49.155846   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0114 03:26:49.155857   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0114 03:26:49.176839   20464 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0114 03:26:49.176857   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0114 03:26:49.176865   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0114 03:26:49.176874   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0114 03:26:49.241906   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0114 03:26:49.241929   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0114 03:26:49.243765   20464 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0114 03:26:49.243787   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0114 03:26:49.249669   20464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 03:26:49.259257   20464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0114 03:26:49.259697   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0114 03:26:49.259707   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0114 03:26:49.341557   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0114 03:26:49.341585   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0114 03:26:49.364249   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0114 03:26:49.364271   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0114 03:26:49.441833   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0114 03:26:49.441849   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0114 03:26:49.480335   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0114 03:26:49.480351   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0114 03:26:49.546865   20464 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0114 03:26:49.546881   20464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0114 03:26:49.560490   20464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0114 03:26:50.371601   20464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21585241s)
	I0114 03:26:50.371655   20464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.121948279s)
	I0114 03:26:50.371709   20464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.112413117s)
	I0114 03:26:50.371724   20464 addons.go:457] Verifying addon metrics-server=true in "newest-cni-032535"
	I0114 03:26:50.571015   20464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.010464456s)
	I0114 03:26:50.597595   20464 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-032535 addons enable metrics-server	
	
	
	I0114 03:26:50.618795   20464 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0114 03:26:50.640713   20464 addons.go:488] enableAddons completed in 2.178282266s
	I0114 03:26:50.641795   20464 ssh_runner.go:195] Run: rm -f paused
	I0114 03:26:50.690832   20464 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I0114 03:26:50.713475   20464 out.go:177] * Done! kubectl is now configured to use "newest-cni-032535" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-14 11:08:21 UTC, end at Sat 2023-01-14 11:35:17 UTC. --
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Stopping Docker Application Container Engine...
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[129]: time="2023-01-14T11:08:24.399807470Z" level=info msg="Processing signal 'terminated'"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[129]: time="2023-01-14T11:08:24.400559208Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[129]: time="2023-01-14T11:08:24.400776500Z" level=info msg="Daemon shutdown complete"
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: docker.service: Succeeded.
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Stopped Docker Application Container Engine.
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Starting Docker Application Container Engine...
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.449671087Z" level=info msg="Starting up"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451369542Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451410150Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451426946Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.451434106Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452532702Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452614323Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452656109Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.452668984Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.456504219Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.460477786Z" level=info msg="Loading containers: start."
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.537877352Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.569431630Z" level=info msg="Loading containers: done."
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.577586702Z" level=info msg="Docker daemon" commit=3056208 graphdriver(s)=overlay2 version=20.10.21
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.577652554Z" level=info msg="Daemon has completed initialization"
	Jan 14 11:08:24 old-k8s-version-030235 systemd[1]: Started Docker Application Container Engine.
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.602452785Z" level=info msg="API listen on [::]:2376"
	Jan 14 11:08:24 old-k8s-version-030235 dockerd[424]: time="2023-01-14T11:08:24.605161476Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-14T11:35:19Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  11:35:19 up  1:34,  0 users,  load average: 0.35, 0.56, 0.83
	Linux old-k8s-version-030235 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-14 11:08:21 UTC, end at Sat 2023-01-14 11:35:19 UTC. --
	Jan 14 11:35:18 old-k8s-version-030235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 11:35:18 old-k8s-version-030235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Jan 14 11:35:18 old-k8s-version-030235 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 11:35:18 old-k8s-version-030235 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 11:35:18 old-k8s-version-030235 kubelet[34648]: I0114 11:35:18.753712   34648 server.go:410] Version: v1.16.0
	Jan 14 11:35:18 old-k8s-version-030235 kubelet[34648]: I0114 11:35:18.754440   34648 plugins.go:100] No cloud provider specified.
	Jan 14 11:35:18 old-k8s-version-030235 kubelet[34648]: I0114 11:35:18.754451   34648 server.go:773] Client rotation is on, will bootstrap in background
	Jan 14 11:35:18 old-k8s-version-030235 kubelet[34648]: I0114 11:35:18.756130   34648 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 14 11:35:18 old-k8s-version-030235 kubelet[34648]: W0114 11:35:18.756852   34648 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 14 11:35:18 old-k8s-version-030235 kubelet[34648]: W0114 11:35:18.756924   34648 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 14 11:35:18 old-k8s-version-030235 kubelet[34648]: F0114 11:35:18.756953   34648 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 14 11:35:18 old-k8s-version-030235 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 14 11:35:18 old-k8s-version-030235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 14 11:35:19 old-k8s-version-030235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Jan 14 11:35:19 old-k8s-version-030235 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 14 11:35:19 old-k8s-version-030235 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 14 11:35:19 old-k8s-version-030235 kubelet[34678]: I0114 11:35:19.503233   34678 server.go:410] Version: v1.16.0
	Jan 14 11:35:19 old-k8s-version-030235 kubelet[34678]: I0114 11:35:19.503520   34678 plugins.go:100] No cloud provider specified.
	Jan 14 11:35:19 old-k8s-version-030235 kubelet[34678]: I0114 11:35:19.503537   34678 server.go:773] Client rotation is on, will bootstrap in background
	Jan 14 11:35:19 old-k8s-version-030235 kubelet[34678]: I0114 11:35:19.505397   34678 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 14 11:35:19 old-k8s-version-030235 kubelet[34678]: W0114 11:35:19.506118   34678 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 14 11:35:19 old-k8s-version-030235 kubelet[34678]: W0114 11:35:19.506187   34678 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 14 11:35:19 old-k8s-version-030235 kubelet[34678]: F0114 11:35:19.506216   34678 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 14 11:35:19 old-k8s-version-030235 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 14 11:35:19 old-k8s-version-030235 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 03:35:19.460387   21492 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 2 (395.935991ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-030235" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.79s)
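The kubelet journal above shows the likely root cause of this failure: kubelet v1.16.0 exits with "failed to run Kubelet: mountpoint for cpu not found" and systemd has restarted it more than 1,600 times, so the apiserver on localhost:8443 never comes up and the addon check times out. With the node reporting kernel 5.15.49-linuxkit, one plausible explanation is a cgroup v2 host, where no standalone cpu controller mount exists and which Kubernetes v1.16 does not support. A minimal diagnostic sketch, assuming the old-k8s-version-030235 container is still running; these commands are illustrative and were not part of the test run:

	docker exec old-k8s-version-030235 cat /proc/mounts | grep cgroup
	docker exec old-k8s-version-030235 stat -fc %T /sys/fs/cgroup    # "cgroup2fs" would indicate cgroup v2
	docker exec old-k8s-version-030235 journalctl -u kubelet --no-pager | tail -n 20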

                                                
                                    

Test pass (262/296)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 24.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.25.3/json-events 16.66
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.37
16 TestDownloadOnly/DeleteAll 0.67
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
18 TestDownloadOnlyKic 13.82
19 TestBinaryMirror 1.68
20 TestOffline 55.47
22 TestAddons/Setup 159.26
26 TestAddons/parallel/MetricsServer 5.58
27 TestAddons/parallel/HelmTiller 13.98
29 TestAddons/parallel/CSI 40.36
30 TestAddons/parallel/Headlamp 11.32
31 TestAddons/parallel/CloudSpanner 5.5
34 TestAddons/serial/GCPAuth/Namespaces 0.1
35 TestAddons/StoppedEnableDisable 12.89
36 TestCertOptions 33.32
37 TestCertExpiration 239.81
38 TestDockerFlags 34.99
39 TestForceSystemdFlag 36.41
40 TestForceSystemdEnv 35.71
42 TestHyperKitDriverInstallOrUpdate 9.11
45 TestErrorSpam/setup 28.34
46 TestErrorSpam/start 2.43
47 TestErrorSpam/status 1.24
48 TestErrorSpam/pause 1.79
49 TestErrorSpam/unpause 1.89
50 TestErrorSpam/stop 12.94
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 45.77
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 40.87
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 8.57
62 TestFunctional/serial/CacheCmd/cache/add_local 1.65
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.99
67 TestFunctional/serial/CacheCmd/cache/delete 0.18
68 TestFunctional/serial/MinikubeKubectlCmd 0.51
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.68
70 TestFunctional/serial/ExtraConfig 46.49
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 3.04
73 TestFunctional/serial/LogsFileCmd 3.08
75 TestFunctional/parallel/ConfigCmd 0.5
76 TestFunctional/parallel/DashboardCmd 13.51
77 TestFunctional/parallel/DryRun 1.38
78 TestFunctional/parallel/InternationalLanguage 0.64
79 TestFunctional/parallel/StatusCmd 1.53
82 TestFunctional/parallel/ServiceCmd 21.84
84 TestFunctional/parallel/AddonsCmd 0.28
85 TestFunctional/parallel/PersistentVolumeClaim 28.27
87 TestFunctional/parallel/SSHCmd 0.79
88 TestFunctional/parallel/CpCmd 2.02
89 TestFunctional/parallel/MySQL 34.47
90 TestFunctional/parallel/FileSync 0.42
91 TestFunctional/parallel/CertSync 2.81
95 TestFunctional/parallel/NodeLabels 0.06
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
99 TestFunctional/parallel/License 0.88
100 TestFunctional/parallel/Version/short 0.13
101 TestFunctional/parallel/Version/components 0.68
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.4
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
106 TestFunctional/parallel/ImageCommands/ImageBuild 4.99
107 TestFunctional/parallel/ImageCommands/Setup 3.14
108 TestFunctional/parallel/DockerEnv/bash 1.91
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.47
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.38
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.74
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.66
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.33
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.97
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.92
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.69
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.2
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.19
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
130 TestFunctional/parallel/ProfileCmd/profile_list 0.55
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
132 TestFunctional/parallel/MountCmd/any-port 9.69
133 TestFunctional/parallel/MountCmd/specific-port 2.54
134 TestFunctional/delete_addon-resizer_images 0.15
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
146 TestJSONOutput/start/Command 44.51
147 TestJSONOutput/start/Audit 0
149 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
150 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
152 TestJSONOutput/pause/Command 0.66
153 TestJSONOutput/pause/Audit 0
155 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/unpause/Command 0.6
159 TestJSONOutput/unpause/Audit 0
161 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/stop/Command 12.23
165 TestJSONOutput/stop/Audit 0
167 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
169 TestErrorJSONOutput 0.79
171 TestKicCustomNetwork/create_custom_network 30.53
172 TestKicCustomNetwork/use_default_bridge_network 31.2
173 TestKicExistingNetwork 32.16
174 TestKicCustomSubnet 30.89
175 TestKicStaticIP 31.46
176 TestMainNoArgs 0.08
177 TestMinikubeProfile 63.93
180 TestMountStart/serial/StartWithMountFirst 7.65
181 TestMountStart/serial/VerifyMountFirst 0.4
182 TestMountStart/serial/StartWithMountSecond 7.51
183 TestMountStart/serial/VerifyMountSecond 0.4
184 TestMountStart/serial/DeleteFirst 2.15
185 TestMountStart/serial/VerifyMountPostDelete 0.39
186 TestMountStart/serial/Stop 1.58
187 TestMountStart/serial/RestartStopped 5.25
188 TestMountStart/serial/VerifyMountPostStop 0.4
191 TestMultiNode/serial/FreshStart2Nodes 88.25
192 TestMultiNode/serial/DeployApp2Nodes 6.18
193 TestMultiNode/serial/PingHostFrom2Pods 0.89
194 TestMultiNode/serial/AddNode 27.94
195 TestMultiNode/serial/ProfileList 0.43
196 TestMultiNode/serial/CopyFile 14.73
197 TestMultiNode/serial/StopNode 13.72
198 TestMultiNode/serial/StartAfterStop 19.35
200 TestMultiNode/serial/DeleteNode 7.8
201 TestMultiNode/serial/StopMultiNode 24.85
202 TestMultiNode/serial/RestartMultiNode 80.54
203 TestMultiNode/serial/ValidateNameConflict 33.39
207 TestPreload 161.24
209 TestScheduledStopUnix 103.83
210 TestSkaffold 69.99
212 TestInsufficientStorage 14.28
228 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 18.41
229 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 21.12
230 TestStoppedBinaryUpgrade/Setup 3.53
232 TestStoppedBinaryUpgrade/MinikubeLogs 3.57
241 TestPause/serial/Start 43.91
242 TestPause/serial/SecondStartNoReconfiguration 39.03
243 TestPause/serial/Pause 0.74
244 TestPause/serial/VerifyStatus 0.41
245 TestPause/serial/Unpause 0.68
246 TestPause/serial/PauseAgain 0.8
247 TestPause/serial/DeletePaused 2.61
248 TestPause/serial/VerifyDeletedResources 0.57
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
251 TestNoKubernetes/serial/StartWithK8s 29.76
252 TestNoKubernetes/serial/StartWithStopK8s 17.07
253 TestNoKubernetes/serial/Start 6.42
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
255 TestNoKubernetes/serial/ProfileList 1.34
256 TestNoKubernetes/serial/Stop 1.59
257 TestNoKubernetes/serial/StartNoArgs 4.23
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
259 TestNetworkPlugins/group/auto/Start 44.21
260 TestNetworkPlugins/group/auto/KubeletFlags 0.41
261 TestNetworkPlugins/group/auto/NetCatPod 14.2
262 TestNetworkPlugins/group/auto/DNS 0.12
263 TestNetworkPlugins/group/auto/Localhost 0.11
264 TestNetworkPlugins/group/auto/HairPin 5.11
265 TestNetworkPlugins/group/kindnet/Start 51.79
266 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
267 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
268 TestNetworkPlugins/group/kindnet/NetCatPod 14.22
269 TestNetworkPlugins/group/kindnet/DNS 0.12
270 TestNetworkPlugins/group/kindnet/Localhost 0.11
271 TestNetworkPlugins/group/kindnet/HairPin 0.12
272 TestNetworkPlugins/group/cilium/Start 101.64
273 TestNetworkPlugins/group/calico/Start 329.15
274 TestNetworkPlugins/group/cilium/ControllerPod 5.02
275 TestNetworkPlugins/group/cilium/KubeletFlags 0.43
276 TestNetworkPlugins/group/cilium/NetCatPod 14.64
277 TestNetworkPlugins/group/cilium/DNS 0.12
278 TestNetworkPlugins/group/cilium/Localhost 0.11
279 TestNetworkPlugins/group/cilium/HairPin 0.12
280 TestNetworkPlugins/group/false/Start 49.95
281 TestNetworkPlugins/group/false/KubeletFlags 0.41
282 TestNetworkPlugins/group/false/NetCatPod 13.25
283 TestNetworkPlugins/group/false/DNS 0.12
284 TestNetworkPlugins/group/false/Localhost 0.13
285 TestNetworkPlugins/group/false/HairPin 5.11
286 TestNetworkPlugins/group/bridge/Start 92.64
287 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
288 TestNetworkPlugins/group/bridge/NetCatPod 15.18
289 TestNetworkPlugins/group/bridge/DNS 0.12
290 TestNetworkPlugins/group/bridge/Localhost 0.11
291 TestNetworkPlugins/group/bridge/HairPin 0.11
292 TestNetworkPlugins/group/enable-default-cni/Start 54.06
293 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
294 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.21
295 TestNetworkPlugins/group/calico/ControllerPod 5.02
296 TestNetworkPlugins/group/calico/KubeletFlags 0.41
297 TestNetworkPlugins/group/calico/NetCatPod 14.23
298 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
299 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
300 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
301 TestNetworkPlugins/group/kubenet/Start 49.15
302 TestNetworkPlugins/group/calico/DNS 0.16
303 TestNetworkPlugins/group/calico/Localhost 0.14
304 TestNetworkPlugins/group/calico/HairPin 0.15
307 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
308 TestNetworkPlugins/group/kubenet/NetCatPod 13.18
309 TestNetworkPlugins/group/kubenet/DNS 0.12
310 TestNetworkPlugins/group/kubenet/Localhost 0.11
313 TestStartStop/group/no-preload/serial/FirstStart 57.31
314 TestStartStop/group/no-preload/serial/DeployApp 9.26
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
316 TestStartStop/group/no-preload/serial/Stop 12.41
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.38
318 TestStartStop/group/no-preload/serial/SecondStart 300.74
321 TestStartStop/group/old-k8s-version/serial/Stop 1.61
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.39
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 21.02
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.43
327 TestStartStop/group/no-preload/serial/Pause 3.44
329 TestStartStop/group/embed-certs/serial/FirstStart 82.97
330 TestStartStop/group/embed-certs/serial/DeployApp 9.28
331 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
332 TestStartStop/group/embed-certs/serial/Stop 12.39
333 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.39
334 TestStartStop/group/embed-certs/serial/SecondStart 303.37
336 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.02
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.48
339 TestStartStop/group/embed-certs/serial/Pause 3.28
341 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.88
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.83
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.39
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.4
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 298.25
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 22.01
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.43
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.3
352 TestStartStop/group/newest-cni/serial/FirstStart 43.48
354 TestStartStop/group/newest-cni/serial/DeployApp 0
355 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
356 TestStartStop/group/newest-cni/serial/Stop 12.39
357 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.39
358 TestStartStop/group/newest-cni/serial/SecondStart 18.64
359 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
360 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
361 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
362 TestStartStop/group/newest-cni/serial/Pause 3.52
x
+
TestDownloadOnly/v1.16.0/json-events (24.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-020520 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-020520 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (24.84849147s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (24.85s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-020520
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-020520: exit status 85 (297.871007ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-020520 | jenkins | v1.28.0 | 14 Jan 23 02:05 PST |          |
	|         | -p download-only-020520        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 02:05:20
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 02:05:20.846622    2730 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:05:20.846889    2730 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:05:20.846896    2730 out.go:309] Setting ErrFile to fd 2...
	I0114 02:05:20.846900    2730 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:05:20.847003    2730 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	W0114 02:05:20.847119    2730 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15642-1559/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15642-1559/.minikube/config/config.json: no such file or directory
	I0114 02:05:20.847871    2730 out.go:303] Setting JSON to true
	I0114 02:05:20.866632    2730 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":294,"bootTime":1673690426,"procs":380,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 02:05:20.866745    2730 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:05:20.889433    2730 out.go:97] [download-only-020520] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:05:20.889604    2730 notify.go:220] Checking for updates...
	W0114 02:05:20.889725    2730 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 02:05:20.909933    2730 out.go:169] MINIKUBE_LOCATION=15642
	I0114 02:05:20.931267    2730 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:05:20.953288    2730 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:05:20.975214    2730 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:05:20.997235    2730 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	W0114 02:05:21.039938    2730 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 02:05:21.040403    2730 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:05:21.099866    2730 docker.go:138] docker version: linux-20.10.21
	I0114 02:05:21.100002    2730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:05:21.238724    2730 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2023-01-14 10:05:21.148495956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:05:21.260964    2730 out.go:97] Using the docker driver based on user configuration
	I0114 02:05:21.261061    2730 start.go:294] selected driver: docker
	I0114 02:05:21.261076    2730 start.go:838] validating driver "docker" against <nil>
	I0114 02:05:21.261336    2730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:05:21.399636    2730 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2023-01-14 10:05:21.311405046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:05:21.399748    2730 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 02:05:21.403986    2730 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0114 02:05:21.404104    2730 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 02:05:21.425510    2730 out.go:169] Using Docker Desktop driver with root privileges
	I0114 02:05:21.447281    2730 cni.go:95] Creating CNI manager for ""
	I0114 02:05:21.447311    2730 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:05:21.447333    2730 start_flags.go:319] config:
	{Name:download-only-020520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-020520 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:05:21.469421    2730 out.go:97] Starting control plane node download-only-020520 in cluster download-only-020520
	I0114 02:05:21.469467    2730 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:05:21.491304    2730 out.go:97] Pulling base image ...
	I0114 02:05:21.491375    2730 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 02:05:21.491459    2730 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:05:21.546358    2730 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0114 02:05:21.546589    2730 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0114 02:05:21.546733    2730 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0114 02:05:21.603874    2730 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0114 02:05:21.603908    2730 cache.go:57] Caching tarball of preloaded images
	I0114 02:05:21.604267    2730 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 02:05:21.626132    2730 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0114 02:05:21.626200    2730 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:05:21.874550    2730 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0114 02:05:40.316839    2730 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:05:40.317025    2730 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-020520"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
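The trace above also records how the v1.16.0 preload is fetched and validated: the tarball URL carries an md5 checksum query parameter, and preload.go saves and then re-verifies that checksum against the cached file. As a sketch, the cached tarball can be checked by hand using the path and checksum from the download line above (macOS md5 -q prints only the digest):

	md5 -q /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	# expected: 326f3ce331abb64565b50b8c9e791244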

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/json-events (16.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-020520 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-020520 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker : (16.656987529s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (16.66s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.25.3/LogsDuration (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-020520
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-020520: exit status 85 (369.842879ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-020520 | jenkins | v1.28.0 | 14 Jan 23 02:05 PST |          |
	|         | -p download-only-020520        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-020520 | jenkins | v1.28.0 | 14 Jan 23 02:05 PST |          |
	|         | -p download-only-020520        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 02:05:45
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 02:05:45.997120    2781 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:05:45.997368    2781 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:05:45.997375    2781 out.go:309] Setting ErrFile to fd 2...
	I0114 02:05:45.997382    2781 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:05:45.997513    2781 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	W0114 02:05:45.997613    2781 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15642-1559/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15642-1559/.minikube/config/config.json: no such file or directory
	I0114 02:05:45.997979    2781 out.go:303] Setting JSON to true
	I0114 02:05:46.016917    2781 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":320,"bootTime":1673690426,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 02:05:46.017010    2781 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:05:46.039702    2781 out.go:97] [download-only-020520] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:05:46.039948    2781 notify.go:220] Checking for updates...
	I0114 02:05:46.061266    2781 out.go:169] MINIKUBE_LOCATION=15642
	I0114 02:05:46.082630    2781 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:05:46.104625    2781 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:05:46.126462    2781 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:05:46.148420    2781 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	W0114 02:05:46.191100    2781 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 02:05:46.191841    2781 config.go:180] Loaded profile config "download-only-020520": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0114 02:05:46.191937    2781 start.go:746] api.Load failed for download-only-020520: filestore "download-only-020520": Docker machine "download-only-020520" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 02:05:46.192020    2781 driver.go:365] Setting default libvirt URI to qemu:///system
	W0114 02:05:46.192066    2781 start.go:746] api.Load failed for download-only-020520: filestore "download-only-020520": Docker machine "download-only-020520" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 02:05:46.251613    2781 docker.go:138] docker version: linux-20.10.21
	I0114 02:05:46.251744    2781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:05:46.389047    2781 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2023-01-14 10:05:46.299969375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:05:46.410892    2781 out.go:97] Using the docker driver based on existing profile
	I0114 02:05:46.410927    2781 start.go:294] selected driver: docker
	I0114 02:05:46.410934    2781 start.go:838] validating driver "docker" against &{Name:download-only-020520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-020520 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/so
cket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:05:46.411140    2781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:05:46.550770    2781 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2023-01-14 10:05:46.460727446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:05:46.553217    2781 cni.go:95] Creating CNI manager for ""
	I0114 02:05:46.553234    2781 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 02:05:46.553250    2781 start_flags.go:319] config:
	{Name:download-only-020520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-020520 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticI
P:}
	I0114 02:05:46.575154    2781 out.go:97] Starting control plane node download-only-020520 in cluster download-only-020520
	I0114 02:05:46.575228    2781 cache.go:120] Beginning downloading kic base image for docker with docker
	I0114 02:05:46.596816    2781 out.go:97] Pulling base image ...
	I0114 02:05:46.596945    2781 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:05:46.597022    2781 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local docker daemon
	I0114 02:05:46.651088    2781 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c to local cache
	I0114 02:05:46.651248    2781 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory
	I0114 02:05:46.651271    2781 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c in local cache directory, skipping pull
	I0114 02:05:46.651276    2781 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c exists in cache, skipping pull
	I0114 02:05:46.651285    2781 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c as a tarball
	I0114 02:05:46.692673    2781 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 02:05:46.692707    2781 cache.go:57] Caching tarball of preloaded images
	I0114 02:05:46.693100    2781 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:05:46.715019    2781 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I0114 02:05:46.715120    2781 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:05:46.946575    2781 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 02:05:58.794639    2781 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:05:58.794854    2781 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0114 02:05:59.376007    2781 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 02:05:59.376138    2781 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/download-only-020520/config.json ...
	I0114 02:05:59.376599    2781 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 02:05:59.376868    2781 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15642-1559/.minikube/cache/darwin/amd64/v1.25.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-020520"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.37s)
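
Note on the "exit status 85" line above: a profile created with --download-only never starts a node, so "minikube logs" has nothing to collect (the stdout shows 'The control plane node "" does not exist') and exits non-zero; the LogsDuration check records that exit status and still passes. A minimal, hypothetical Go sketch of the same check follows (binary path, profile name and exit code are taken from the log above; everything else is illustrative and is not the actual aaa_download_only_test.go code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "minikube logs" against a --download-only profile: no node exists, so a
	// non-zero exit (85 in the run above) is the expected outcome.
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "download-only-020520", "logs")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Printf("got the expected exit status 85\n%s", out)
		return
	}
	fmt.Printf("unexpected result: %v\n%s", err, out)
}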

TestDownloadOnly/DeleteAll (0.67s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.67s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-020520
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

TestDownloadOnlyKic (13.82s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-020604 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-020604 --force --alsologtostderr --driver=docker : (12.680150825s)
helpers_test.go:175: Cleaning up "download-docker-020604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-020604
--- PASS: TestDownloadOnlyKic (13.82s)

TestBinaryMirror (1.68s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-020618 --alsologtostderr --binary-mirror http://127.0.0.1:49444 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-020618 --alsologtostderr --binary-mirror http://127.0.0.1:49444 --driver=docker : (1.068020333s)
helpers_test.go:175: Cleaning up "binary-mirror-020618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-020618
--- PASS: TestBinaryMirror (1.68s)

TestOffline (55.47s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-024325 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-024325 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (52.380977539s)
helpers_test.go:175: Cleaning up "offline-docker-024325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-024325
E0114 02:44:19.832830    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-024325: (3.085203307s)
--- PASS: TestOffline (55.47s)

TestAddons/Setup (159.26s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-020619 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p addons-020619 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m39.259438898s)
--- PASS: TestAddons/Setup (159.26s)

TestAddons/parallel/MetricsServer (5.58s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 2.287229ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-6vbcl" [66fa9da4-d6a0-4157-a4c6-535b9839cb4a] Running
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007484762s
addons_test.go:372: (dbg) Run:  kubectl --context addons-020619 top pods -n kube-system
addons_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020619 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.58s)

TestAddons/parallel/HelmTiller (13.98s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 2.557369ms
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-jfzqk" [5494d40d-c622-4fd1-a957-df71b1de11f2] Running
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008956824s
addons_test.go:430: (dbg) Run:  kubectl --context addons-020619 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:430: (dbg) Done: kubectl --context addons-020619 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.449249351s)
addons_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020619 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.98s)

TestAddons/parallel/CSI (40.36s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 5.255495ms
addons_test.go:521: (dbg) Run:  kubectl --context addons-020619 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-020619 get pvc hpvc -o jsonpath={.status.phase} -n default
=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) Run:  kubectl --context addons-020619 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [32c62940-964d-4fc5-aac6-34ceeab399f8] Pending
helpers_test.go:342: "task-pv-pod" [32c62940-964d-4fc5-aac6-34ceeab399f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [32c62940-964d-4fc5-aac6-34ceeab399f8] Running
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.008290661s
addons_test.go:541: (dbg) Run:  kubectl --context addons-020619 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-020619 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-020619 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context addons-020619 delete pod task-pv-pod
addons_test.go:557: (dbg) Run:  kubectl --context addons-020619 delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context addons-020619 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-020619 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-020619 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [1639d71a-8ce2-4c29-b007-e293562184a5] Pending
helpers_test.go:342: "task-pv-pod-restore" [1639d71a-8ce2-4c29-b007-e293562184a5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [1639d71a-8ce2-4c29-b007-e293562184a5] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.008357441s
addons_test.go:583: (dbg) Run:  kubectl --context addons-020619 delete pod task-pv-pod-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-020619 delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context addons-020619 delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020619 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-darwin-amd64 -p addons-020619 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.954655095s)
addons_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020619 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.36s)

TestAddons/parallel/Headlamp (11.32s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-020619 --alsologtostderr -v=1
addons_test.go:774: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-020619 --alsologtostderr -v=1: (1.293992548s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-cq4h2" [00d8532f-71a4-4cdd-81f7-0327fab0d9dd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-764769c887-cq4h2" [00d8532f-71a4-4cdd-81f7-0327fab0d9dd] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.023731042s
--- PASS: TestAddons/parallel/Headlamp (11.32s)

TestAddons/parallel/CloudSpanner (5.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-dh99p" [e97fc9aa-b9b4-4a38-8e05-7bb45b4bb9c6] Running
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009248282s
addons_test.go:798: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-020619
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context addons-020619 create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context addons-020619 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (12.89s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-020619
addons_test.go:139: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-020619: (12.439640427s)
addons_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-020619
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-020619
--- PASS: TestAddons/StoppedEnableDisable (12.89s)

TestCertOptions (33.32s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-024526 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-024526 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (29.886999398s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-024526 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-024526 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-024526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-024526
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-024526: (2.562873687s)
--- PASS: TestCertOptions (33.32s)

TestCertExpiration (239.81s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-024457 --memory=2048 --cert-expiration=3m --driver=docker 
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-024457 --memory=2048 --cert-expiration=3m --driver=docker : (31.616384602s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-024457 --memory=2048 --cert-expiration=8760h --driver=docker 
E0114 02:48:39.375484    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-024457 --memory=2048 --cert-expiration=8760h --driver=docker : (25.560276363s)
helpers_test.go:175: Cleaning up "cert-expiration-024457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-024457
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-024457: (2.631133216s)
--- PASS: TestCertExpiration (239.81s)

TestDockerFlags (34.99s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-024451 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-024451 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (31.320597216s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-024451 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-024451 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-024451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-024451
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-024451: (2.778099962s)
--- PASS: TestDockerFlags (34.99s)
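
TestDockerFlags passes --docker-env and --docker-opt values at start time and then reads the Docker systemd unit inside the node with "systemctl show". A rough, hypothetical sketch of the same verification follows (profile name, flags and ssh commands are copied from the log above; the exact substrings expected in ExecStart are an assumption, not something the report confirms, and this is not the actual docker_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	mk := "out/minikube-darwin-amd64"
	profile := "docker-flags-024451"

	// Read a single property of the docker systemd unit inside the node,
	// mirroring the two ssh commands in the log above.
	show := func(prop string) string {
		out, err := exec.Command(mk, "-p", profile, "ssh",
			"sudo systemctl show docker --property="+prop+" --no-pager").CombinedOutput()
		if err != nil {
			fmt.Printf("ssh failed: %v\n", err)
		}
		return string(out)
	}

	env := show("Environment")
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} { // from the --docker-env flags
		fmt.Printf("Environment contains %q: %v\n", want, strings.Contains(env, want))
	}

	execStart := show("ExecStart")
	for _, want := range []string{"--debug", "--icc=true"} { // assumed form of the --docker-opt values
		fmt.Printf("ExecStart contains %q: %v\n", want, strings.Contains(execStart, want))
	}
}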

TestForceSystemdFlag (36.41s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-024421 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-024421 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (32.771786563s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-024421 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-024421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-024421
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-024421: (3.060068106s)
--- PASS: TestForceSystemdFlag (36.41s)

TestForceSystemdEnv (35.71s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-024415 --memory=2048 --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-024415 --memory=2048 --alsologtostderr -v=5 --driver=docker : (32.523878584s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-024415 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-024415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-024415
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-024415: (2.667560135s)
--- PASS: TestForceSystemdEnv (35.71s)

TestHyperKitDriverInstallOrUpdate (9.11s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.11s)

TestErrorSpam/setup (28.34s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-021046 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-021046 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 --driver=docker : (28.341801951s)
--- PASS: TestErrorSpam/setup (28.34s)

TestErrorSpam/start (2.43s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 start --dry-run
--- PASS: TestErrorSpam/start (2.43s)

TestErrorSpam/status (1.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 status
--- PASS: TestErrorSpam/status (1.24s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.89s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

TestErrorSpam/stop (12.94s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 stop: (12.288953803s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-021046 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-021046 stop
--- PASS: TestErrorSpam/stop (12.94s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15642-1559/.minikube/files/etc/test/nested/copy/2728/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021137 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-021137 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (45.774075212s)
--- PASS: TestFunctional/serial/StartWithProxy (45.77s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.87s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021137 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-021137 --alsologtostderr -v=8: (40.867011939s)
functional_test.go:656: soft start took 40.86753099s for "functional-021137" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.87s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-021137 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 cache add k8s.gcr.io/pause:3.1: (2.967534959s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 cache add k8s.gcr.io/pause:3.3: (2.929034886s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 cache add k8s.gcr.io/pause:latest: (2.668514531s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.57s)

TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-021137 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1545652694/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 cache add minikube-local-cache-test:functional-021137
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 cache add minikube-local-cache-test:functional-021137: (1.114009746s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 cache delete minikube-local-cache-test:functional-021137
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-021137
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021137 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (392.81213ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 cache reload: (1.736772817s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.99s)
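
The cache_reload sequence above is: remove k8s.gcr.io/pause:latest inside the node, confirm crictl no longer finds it, run "minikube cache reload" to push the cached image back into the node, then confirm crictl sees it again. A hypothetical replay of those four steps as a standalone Go program follows (commands and profile name are copied from the log; the error handling is illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := "out/minikube-darwin-amd64"
	profile := "functional-021137"

	steps := [][]string{
		{"-p", profile, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest"},
		{"-p", profile, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"}, // expected to fail: image gone
		{"-p", profile, "cache", "reload"},                                     // restores images from minikube's local cache
		{"-p", profile, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"}, // expected to succeed again
	}
	for _, args := range steps {
		out, err := exec.Command(mk, args...).CombinedOutput()
		fmt.Printf("$ %s %v\nerr: %v\n%s\n", mk, args, err, out)
	}
}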

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 kubectl -- --context functional-021137 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-021137 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

TestFunctional/serial/ExtraConfig (46.49s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021137 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0114 02:13:59.192878    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:13:59.199373    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:13:59.209549    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:13:59.231248    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:13:59.271768    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:13:59.353020    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:13:59.513337    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:13:59.834785    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:14:00.475211    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:14:01.756657    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:14:04.318952    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-021137 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.492856236s)
functional_test.go:754: restart took 46.492993619s for "functional-021137" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.49s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-021137 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.04s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 logs: (3.037177755s)
--- PASS: TestFunctional/serial/LogsCmd (3.04s)

TestFunctional/serial/LogsFileCmd (3.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2842930307/001/logs.txt
E0114 02:14:09.440342    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2842930307/001/logs.txt: (3.082537299s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.08s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021137 config get cpus: exit status 14 (59.112485ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021137 config get cpus: exit status 14 (65.4016ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (13.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-021137 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-021137 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 5393: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.51s)

                                                
                                    
TestFunctional/parallel/DryRun (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021137 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-021137 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (630.276582ms)

                                                
                                                
-- stdout --
	* [functional-021137] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 02:15:30.210150    5303 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:15:30.210341    5303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:15:30.210348    5303 out.go:309] Setting ErrFile to fd 2...
	I0114 02:15:30.210352    5303 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:15:30.210468    5303 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:15:30.210971    5303 out.go:303] Setting JSON to false
	I0114 02:15:30.229894    5303 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":904,"bootTime":1673690426,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 02:15:30.229977    5303 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:15:30.252676    5303 out.go:177] * [functional-021137] minikube v1.28.0 on Darwin 13.0.1
	I0114 02:15:30.295514    5303 notify.go:220] Checking for updates...
	I0114 02:15:30.317184    5303 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:15:30.338425    5303 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:15:30.359561    5303 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:15:30.381189    5303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:15:30.402511    5303 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 02:15:30.425044    5303 config.go:180] Loaded profile config "functional-021137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:15:30.425772    5303 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:15:30.488643    5303 docker.go:138] docker version: linux-20.10.21
	I0114 02:15:30.488792    5303 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:15:30.629236    5303 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 10:15:30.540287566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:15:30.651466    5303 out.go:177] * Using the docker driver based on existing profile
	I0114 02:15:30.672910    5303 start.go:294] selected driver: docker
	I0114 02:15:30.672930    5303 start.go:838] validating driver "docker" against &{Name:functional-021137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-021137 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:15:30.673141    5303 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:15:30.697086    5303 out.go:177] 
	W0114 02:15:30.718927    5303 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0114 02:15:30.739995    5303 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021137 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.38s)
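
The non-zero exit above is the intended negative case: --dry-run still validates the requested resources before doing anything. Both invocations from the log, as a sketch:

  # too little memory: fails validation with RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23
  minikube start -p functional-021137 --dry-run --memory 250MB --alsologtostderr --driver=docker
  # no memory override: the dry run validates against the existing profile and succeeds
  minikube start -p functional-021137 --dry-run --alsologtostderr -v=1 --driver=docker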

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-021137 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-021137 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (642.610074ms)

                                                
                                                
-- stdout --
	* [functional-021137] minikube v1.28.0 sur Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 02:15:31.583002    5343 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:15:31.583153    5343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:15:31.583159    5343 out.go:309] Setting ErrFile to fd 2...
	I0114 02:15:31.583164    5343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:15:31.583287    5343 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:15:31.583751    5343 out.go:303] Setting JSON to false
	I0114 02:15:31.602887    5343 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":905,"bootTime":1673690426,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0114 02:15:31.602973    5343 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0114 02:15:31.624966    5343 out.go:177] * [functional-021137] minikube v1.28.0 sur Darwin 13.0.1
	I0114 02:15:31.667656    5343 notify.go:220] Checking for updates...
	I0114 02:15:31.688781    5343 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 02:15:31.710069    5343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	I0114 02:15:31.731819    5343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0114 02:15:31.752971    5343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 02:15:31.774797    5343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	I0114 02:15:31.796600    5343 config.go:180] Loaded profile config "functional-021137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:15:31.797347    5343 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 02:15:31.859062    5343 docker.go:138] docker version: linux-20.10.21
	I0114 02:15:31.859208    5343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0114 02:15:31.999996    5343 info.go:266] docker info: {ID:VF6S:3GIL:4JQH:LPDQ:6EC6:D32C:6RZ7:IA3N:LZ7R:3YN2:QUOM:SIJ5 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-14 10:15:31.909334187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I0114 02:15:32.043182    5343 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0114 02:15:32.064202    5343 start.go:294] selected driver: docker
	I0114 02:15:32.064233    5343 start.go:838] validating driver "docker" against &{Name:functional-021137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-021137 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:fals
e portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 02:15:32.064405    5343 start.go:849] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 02:15:32.090240    5343 out.go:177] 
	W0114 02:15:32.111120    5343 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0114 02:15:32.132011    5343 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.64s)
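
This is the same insufficient-memory dry run as above, but with the output localized to French. The log does not show how the locale is injected; assuming it comes from the environment, a sketch would be:

  # assumption: a French locale in the environment selects the translated messages;
  # the command still fails with RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23
  LC_ALL=fr_FR.UTF-8 minikube start -p functional-021137 --dry-run --memory 250MB --driver=docker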

                                                
                                    
TestFunctional/parallel/StatusCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.53s)
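
The three status calls cover the default, Go-template, and JSON output paths. A short sketch (the template string is copied from the log, including its "kublet" spelling; piping to jq is illustrative):

  minikube -p functional-021137 status
  minikube -p functional-021137 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p functional-021137 status -o json | jq .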

                                                
                                    
TestFunctional/parallel/ServiceCmd (21.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-021137 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-021137 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-gztzj" [89f85acf-1ffd-4547-a0eb-fda7f1c0c2aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-gztzj" [89f85acf-1ffd-4547-a0eb-fda7f1c0c2aa] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 15.008949441s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 service list
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 service --namespace=default --https --url hello-node
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 service --namespace=default --https --url hello-node: (2.029704981s)
functional_test.go:1476: found endpoint: https://127.0.0.1:50324
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 service hello-node --url --format={{.IP}}: (2.030125144s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 service hello-node --url
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 service hello-node --url: (2.028982551s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:50338
--- PASS: TestFunctional/parallel/ServiceCmd (21.84s)
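
The service test builds a small NodePort service and then resolves its URL three ways. The commands below mirror the log; the kubectl wait line is an illustrative stand-in for the test's own readiness polling:

  kubectl --context functional-021137 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
  kubectl --context functional-021137 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-021137 wait --for=condition=ready pod -l app=hello-node --timeout=600s
  minikube -p functional-021137 service list
  minikube -p functional-021137 service --namespace=default --https --url hello-node   # e.g. https://127.0.0.1:50324
  minikube -p functional-021137 service hello-node --url --format={{.IP}}
  minikube -p functional-021137 service hello-node --url                               # e.g. http://127.0.0.1:50338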

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [ecfc0dd0-a4e5-49d7-9007-8aa2d388a08b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009559604s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-021137 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-021137 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-021137 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-021137 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [823315be-1aa0-4c27-bae6-e24791245f1b] Pending
helpers_test.go:342: "sp-pod" [823315be-1aa0-4c27-bae6-e24791245f1b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [823315be-1aa0-4c27-bae6-e24791245f1b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.007006164s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-021137 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-021137 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-021137 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [a6ce4127-ba22-4f4c-a980-d066f8936d4c] Pending
helpers_test.go:342: "sp-pod" [a6ce4127-ba22-4f4c-a980-d066f8936d4c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [a6ce4127-ba22-4f4c-a980-d066f8936d4c] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007657692s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-021137 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.27s)
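
The PVC test checks that data written through the claim survives deletion and recreation of the pod. The manifests are the testdata/storage-provisioner files from the minikube repository; the kubectl wait lines stand in for the test's polling:

  kubectl --context functional-021137 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-021137 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-021137 wait --for=condition=ready pod sp-pod --timeout=180s
  # write through the mounted claim, then recreate the pod
  kubectl --context functional-021137 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-021137 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-021137 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-021137 wait --for=condition=ready pod sp-pod --timeout=180s
  # the file written by the first pod is still present on the claim
  kubectl --context functional-021137 exec sp-pod -- ls /tmp/mount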

                                                
                                    
TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh -n functional-021137 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 cp functional-021137:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd1389047170/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh -n functional-021137 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)
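
The cp test copies a file into the node and back out, reading it over ssh after each hop. Equivalent manual steps (the host-side destination path is shortened; everything else is as logged):

  # host -> node
  minikube -p functional-021137 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-021137 ssh -n functional-021137 "sudo cat /home/docker/cp-test.txt"
  # node -> host
  minikube -p functional-021137 cp functional-021137:/home/docker/cp-test.txt /tmp/cp-test.txt
  diff testdata/cp-test.txt /tmp/cp-test.txt   # illustrative check; the test re-reads the node copy over ssh instead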

                                                
                                    
TestFunctional/parallel/MySQL (34.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-021137 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-wjc28" [58635795-2091-4505-b6bb-0b9f2bc667ff] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-wjc28" [58635795-2091-4505-b6bb-0b9f2bc667ff] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.017142253s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-021137 exec mysql-596b7fcdbf-wjc28 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-021137 exec mysql-596b7fcdbf-wjc28 -- mysql -ppassword -e "show databases;": exit status 1 (286.613624ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-021137 exec mysql-596b7fcdbf-wjc28 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-021137 exec mysql-596b7fcdbf-wjc28 -- mysql -ppassword -e "show databases;": exit status 1 (165.909965ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-021137 exec mysql-596b7fcdbf-wjc28 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-021137 exec mysql-596b7fcdbf-wjc28 -- mysql -ppassword -e "show databases;": exit status 1 (166.451667ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-021137 exec mysql-596b7fcdbf-wjc28 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-021137 exec mysql-596b7fcdbf-wjc28 -- mysql -ppassword -e "show databases;": exit status 1 (112.511154ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-021137 exec mysql-596b7fcdbf-wjc28 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.47s)
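
The access-denied (1045) and socket (2002) errors above are expected while the mysql container is still initializing: the image brings the server up, applies the root password, and restarts it, so the first few exec attempts fail and the test simply retries until "show databases;" succeeds. A hedged retry loop doing the same by hand:

  kubectl --context functional-021137 replace --force -f testdata/mysql.yaml
  kubectl --context functional-021137 wait --for=condition=ready pod -l app=mysql --timeout=600s
  POD=$(kubectl --context functional-021137 get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
  # keep retrying until mysqld accepts the configured root password
  until kubectl --context functional-021137 exec "$POD" -- mysql -ppassword -e "show databases;"; do sleep 5; done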

                                                
                                    
TestFunctional/parallel/FileSync (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/2728/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo cat /etc/test/nested/copy/2728/hosts"
E0114 02:14:19.682461    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)
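
The synced file comes from minikube's file-sync feature: files placed under $MINIKUBE_HOME/files/ (here /Users/jenkins/minikube-integration/15642-1559/.minikube/files) are copied into the node at the same absolute path. A sketch of laying out the fixture by hand (2728 matches the test process ID seen elsewhere in this log):

  mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/2728"
  echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/2728/hosts"
  minikube start -p functional-021137 --driver=docker      # files/ content is synced into the node on start
  minikube -p functional-021137 ssh "sudo cat /etc/test/nested/copy/2728/hosts"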

                                                
                                    
TestFunctional/parallel/CertSync (2.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/2728.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo cat /etc/ssl/certs/2728.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/2728.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo cat /usr/share/ca-certificates/2728.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/27282.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo cat /etc/ssl/certs/27282.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/27282.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo cat /usr/share/ca-certificates/27282.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.81s)
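
CertSync is the certificate counterpart of FileSync: certificates dropped under $MINIKUBE_HOME/certs/ are installed inside the node, and the test checks both the original filenames and the OpenSSL subject-hash names (51391683.0 and 3ec20f2e.0 above). A hedged sketch; the install locations are the paths the test probes:

  cp extra-ca.pem "$MINIKUBE_HOME/certs/2728.pem"          # hypothetical fixture; the real one is generated by the suite
  minikube start -p functional-021137 --driver=docker
  minikube -p functional-021137 ssh "sudo cat /etc/ssl/certs/2728.pem"
  minikube -p functional-021137 ssh "sudo cat /usr/share/ca-certificates/2728.pem"
  # the hashed name checked above is the certificate's subject hash:
  openssl x509 -noout -hash -in extra-ca.pem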

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-021137 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021137 ssh "sudo systemctl is-active crio": exit status 1 (555.775461ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
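
The non-zero exit here is the pass condition: with docker as the container runtime, crio must not be running, and systemctl is-active exits with status 3 for an inactive unit (hence "ssh: Process exited with status 3"). Checked by hand:

  minikube -p functional-021137 ssh "sudo systemctl is-active crio"     # prints "inactive", exit status 3
  minikube -p functional-021137 ssh "sudo systemctl is-active docker"   # the active runtime; prints "active", exit 0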

                                                
                                    
TestFunctional/parallel/License (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.88s)

                                                
                                    
TestFunctional/parallel/Version/short (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

                                                
                                    
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021137 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-021137
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-021137
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
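
The four ImageList subtests are the same listing rendered in different formats:

  minikube -p functional-021137 image ls --format short   # repo:tag only, one image per line
  minikube -p functional-021137 image ls --format table   # table with image ID and size
  minikube -p functional-021137 image ls --format json
  minikube -p functional-021137 image ls --format yaml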

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021137 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-021137 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-021137 | 42158ceb29fc6 | 30B    |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| docker.io/library/mysql                     | 5.7               | d410f4167eea9 | 495MB  |
| docker.io/localhost/my-image                | functional-021137 | 39aaf851776b8 | 1.24MB |
|---------------------------------------------|-------------------|---------------|--------|
2023/01/14 02:15:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021137 image ls --format json:
[{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-021137"],"size":"32900000"},{"id":"39aaf851776b8f1dae11bae4a370d0c380e964934c60d09b809442de97f4dad0","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-021137"],"size":"1240000"},{"id":"42158ceb29fc6856d2eaf4b6cab11743aa090c2b4b49cbafbdafd429e5350be5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-021137"],"size":"30"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"
4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"495000000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"rep
oTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da
11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021137 image ls --format yaml:
- id: d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "495000000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-021137
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 42158ceb29fc6856d2eaf4b6cab11743aa090c2b4b49cbafbdafd429e5350be5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-021137
size: "30"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
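
For illustration only (not part of the test output): the `image ls --format yaml` listing above is a flat YAML sequence of id/repoDigests/repoTags/size entries, so it can be decoded directly. A minimal Go sketch, assuming gopkg.in/yaml.v3 and that the listing has been saved to a local file named image-ls.yaml (the file name is hypothetical):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// image mirrors the fields visible in the listing above.
type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	data, err := os.ReadFile("image-ls.yaml") // saved output of `minikube image ls --format yaml`
	if err != nil {
		panic(err)
	}
	var images []image
	if err := yaml.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}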

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021137 ssh pgrep buildkitd: exit status 1 (377.178065ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image build -t localhost/my-image:functional-021137 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 image build -t localhost/my-image:functional-021137 testdata/build: (4.22845884s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-021137 image build -t localhost/my-image:functional-021137 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 0c420e2ab190
Removing intermediate container 0c420e2ab190
---> 59c7d697faaf
Step 3/3 : ADD content.txt /
---> 39aaf851776b
Successfully built 39aaf851776b
Successfully tagged localhost/my-image:functional-021137
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.99s)
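
For illustration only: the build output above implies a three-step Dockerfile in testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A hedged Go sketch that recreates an equivalent build context locally; only the three steps come from the log, while the directory name and the contents of content.txt are assumptions:

package main

import (
	"os"
	"path/filepath"
)

// dockerfile reproduces the three build steps shown in the output above.
const dockerfile = `FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
`

func main() {
	dir := "build-context" // hypothetical local directory, not the repo's testdata/build
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// content.txt is required by the ADD step; its real contents are not visible in the log.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("placeholder\n"), 0o644); err != nil {
		panic(err)
	}
}

Such a context could then be built the same way the test does, e.g. out/minikube-darwin-amd64 -p functional-021137 image build -t localhost/my-image:functional-021137 build-context.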

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (3.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.073529003s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-021137
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.14s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-021137 docker-env) && out/minikube-darwin-amd64 status -p functional-021137"
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-021137 docker-env) && out/minikube-darwin-amd64 status -p functional-021137": (1.241021074s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-021137 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.91s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.47s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image load --daemon gcr.io/google-containers/addon-resizer:functional-021137

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 image load --daemon gcr.io/google-containers/addon-resizer:functional-021137: (3.428341448s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image load --daemon gcr.io/google-containers/addon-resizer:functional-021137
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 image load --daemon gcr.io/google-containers/addon-resizer:functional-021137: (2.242385744s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.524840092s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-021137
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image load --daemon gcr.io/google-containers/addon-resizer:functional-021137
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 image load --daemon gcr.io/google-containers/addon-resizer:functional-021137: (4.33947372s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image save gcr.io/google-containers/addon-resizer:functional-021137 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 image save gcr.io/google-containers/addon-resizer:functional-021137 /Users/jenkins/workspace/addon-resizer-save.tar: (1.973987178s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image rm gcr.io/google-containers/addon-resizer:functional-021137
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 image load /Users/jenkins/workspace/addon-resizer-save.tar: (2.276427498s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-021137
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 image save --daemon gcr.io/google-containers/addon-resizer:functional-021137
E0114 02:14:40.163694    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-021137 image save --daemon gcr.io/google-containers/addon-resizer:functional-021137: (3.077742399s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-021137
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.20s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-021137 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-021137 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [bb7ab7e1-3780-4631-98e6-db83a2dc0795] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [bb7ab7e1-3780-4631-98e6-db83a2dc0795] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.046126674s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.19s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-021137 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-021137 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 4980: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "464.844921ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "80.388379ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "420.729653ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "81.013562ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-021137 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port846201193/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1673691317923075000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port846201193/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1673691317923075000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port846201193/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1673691317923075000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port846201193/001/test-1673691317923075000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "findmnt -T /mount-9p | grep 9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021137 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (397.342248ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 14 10:15 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 14 10:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 14 10:15 test-1673691317923075000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh cat /mount-9p/test-1673691317923075000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-021137 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [7aa6e3b3-0b99-4f16-a2c3-6a3a2d886c58] Pending
E0114 02:15:21.124818    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
helpers_test.go:342: "busybox-mount" [7aa6e3b3-0b99-4f16-a2c3-6a3a2d886c58] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:342: "busybox-mount" [7aa6e3b3-0b99-4f16-a2c3-6a3a2d886c58] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [7aa6e3b3-0b99-4f16-a2c3-6a3a2d886c58] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.006940848s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-021137 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo umount -f /mount-9p"

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-021137 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port846201193/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.69s)
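
For illustration only: the steps above amount to a host-to-guest round trip over the 9p mount. A rough Go sketch of the same flow, assuming the minikube binary is on PATH and the functional-021137 profile from this run is still up; the fixed sleep is a simplification of the findmnt polling the test performs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	hostDir, err := os.MkdirTemp("", "mount-9p-demo")
	if err != nil {
		panic(err)
	}
	// Background 9p mount, equivalent to the `minikube mount` daemon started above.
	mount := exec.Command("minikube", "-p", "functional-021137", "mount", hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Crude wait for the 9p server; the test polls `findmnt -T /mount-9p` instead.
	time.Sleep(5 * time.Second)

	if err := os.WriteFile(filepath.Join(hostDir, "created-by-demo"), []byte("hello from the host\n"), 0o644); err != nil {
		panic(err)
	}
	// Read the file back through the guest, mirroring `ssh cat /mount-9p/...` above.
	out, err := exec.Command("minikube", "-p", "functional-021137", "ssh", "cat /mount-9p/created-by-demo").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}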

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-021137 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3301065674/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021137 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (388.432778ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-021137 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3301065674/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-021137 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-021137 ssh "sudo umount -f /mount-9p": exit status 1 (371.260314ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-021137 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-021137 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port3301065674/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.54s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.15s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-021137
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

                                                
                                    
TestFunctional/delete_my-image_image (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-021137
--- PASS: TestFunctional/delete_my-image_image (0.06s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-021137
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestJSONOutput/start/Command (44.51s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-022319 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0114 02:23:59.147637    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-022319 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (44.509795787s)
--- PASS: TestJSONOutput/start/Command (44.51s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-022319 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-022319 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.23s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-022319 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-022319 --output=json --user=testUser: (12.23084826s)
--- PASS: TestJSONOutput/stop/Command (12.23s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.79s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-022420 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-022420 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (338.453815ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a328d675-289c-4954-85cf-9907dd471d0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-022420] minikube v1.28.0 on Darwin 13.0.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"30f99fe9-269d-4b2e-b35c-9a416ce66e46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"6ad37731-df3d-4929-941c-7b7cf53d3e44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig"}}
	{"specversion":"1.0","id":"e3f5fadd-7b26-4ad3-8b9f-eb2eb1946176","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"791e4ce7-2f3f-449e-9147-ddc4e10521e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"90654fc8-7ea5-4363-b84a-914c788d2ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube"}}
	{"specversion":"1.0","id":"5d2adcea-b2a0-45b1-9bdc-abeab5cad708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-022420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-022420
--- PASS: TestErrorJSONOutput (0.79s)
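
For illustration only: each stdout line above is a CloudEvents-style JSON record whose type (io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info, io.k8s.sigs.minikube.error) and string-valued data map are visible in the log. A minimal Go sketch that filters step and error events from such a stream on stdin:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the fields visible in the JSON lines above; all data values in
// those lines are strings.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. `minikube start --output=json ... | thisprogram`
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}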

                                                
                                    
TestKicCustomNetwork/create_custom_network (30.53s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-022420 --network=
E0114 02:24:47.526658    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-022420 --network=: (27.862554921s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-022420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-022420
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-022420: (2.614423664s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.53s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.2s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-022451 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-022451 --network=bridge: (28.727196052s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-022451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-022451
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-022451: (2.41948537s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.20s)

                                                
                                    
TestKicExistingNetwork (32.16s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-022522 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-022522 --network=existing-network: (29.401734153s)
helpers_test.go:175: Cleaning up "existing-network-022522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-022522
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-022522: (2.391205203s)
--- PASS: TestKicExistingNetwork (32.16s)

                                                
                                    
TestKicCustomSubnet (30.89s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-022554 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-022554 --subnet=192.168.60.0/24: (28.26312358s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-022554 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-022554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-022554
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-022554: (2.573267833s)
--- PASS: TestKicCustomSubnet (30.89s)
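
For illustration only: the subnet assertion above can be reproduced outside the test by running the same docker network inspect Go template and comparing it with the --subnet value passed to start. A small sketch, with the network name and expected subnet taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the test uses; network name and expected subnet would
	// differ for another profile.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-022554",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		panic(fmt.Sprintf("unexpected subnet %q", got))
	}
	fmt.Println("subnet matches:", got)
}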

                                                
                                    
TestKicStaticIP (31.46s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-022625 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-022625 --static-ip=192.168.200.200: (28.643668722s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-022625 ip
helpers_test.go:175: Cleaning up "static-ip-022625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-022625
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-022625: (2.570922282s)
--- PASS: TestKicStaticIP (31.46s)

                                                
                                    
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (63.93s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-022657 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-022657 --driver=docker : (29.114630725s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-022657 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-022657 --driver=docker : (27.805316641s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-022657
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-022657
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-022657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-022657
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-022657: (2.59236326s)
helpers_test.go:175: Cleaning up "first-022657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-022657
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-022657: (2.633352652s)
--- PASS: TestMinikubeProfile (63.93s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-022801 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-022801 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.646834852s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.65s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-022801 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.51s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-022801 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-022801 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.508179513s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.51s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-022801 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.15s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-022801 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-022801 --alsologtostderr -v=5: (2.145592619s)
--- PASS: TestMountStart/serial/DeleteFirst (2.15s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-022801 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.58s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-022801
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-022801: (1.576841152s)
--- PASS: TestMountStart/serial/Stop (1.58s)

                                                
                                    
TestMountStart/serial/RestartStopped (5.25s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-022801
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-022801: (4.250722183s)
--- PASS: TestMountStart/serial/RestartStopped (5.25s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-022801 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (88.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022829 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0114 02:28:59.148024    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:29:19.830443    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-022829 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m27.558292909s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (88.25s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-022829 -- rollout status deployment/busybox: (4.361048405s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-586cr -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-tqh8p -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-586cr -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-tqh8p -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-586cr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-tqh8p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.18s)
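
The deploy-and-resolve sequence above reduces to the following sketch (the deployment name busybox and the manifest path come from the test's own testdata; the pod name is a placeholder to substitute per run):
$ # apply the test manifest and wait for the rollout to complete
$ minikube kubectl -p demo-multinode -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
$ minikube kubectl -p demo-multinode -- rollout status deployment/busybox
$ # exercise in-cluster DNS from one of the pods
$ minikube kubectl -p demo-multinode -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local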

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-586cr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-586cr -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-tqh8p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-022829 -- exec busybox-65db55d5d6-tqh8p -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                    
TestMultiNode/serial/AddNode (27.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-022829 -v 3 --alsologtostderr
E0114 02:30:22.202128    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-022829 -v 3 --alsologtostderr: (26.964940873s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.94s)
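
Adding a node by hand follows the same shape, sketched with the illustrative profile name used above:
$ # attach another worker (it becomes m03) to the existing profile
$ minikube node add -p demo-multinode
$ minikube -p demo-multinode status --alsologtostderr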

                                                
                                    
TestMultiNode/serial/ProfileList (0.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.43s)

                                                
                                    
TestMultiNode/serial/CopyFile (14.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp testdata/cp-test.txt multinode-022829:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp multinode-022829:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1501450262/001/cp-test_multinode-022829.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp multinode-022829:/home/docker/cp-test.txt multinode-022829-m02:/home/docker/cp-test_multinode-022829_multinode-022829-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m02 "sudo cat /home/docker/cp-test_multinode-022829_multinode-022829-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp multinode-022829:/home/docker/cp-test.txt multinode-022829-m03:/home/docker/cp-test_multinode-022829_multinode-022829-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m03 "sudo cat /home/docker/cp-test_multinode-022829_multinode-022829-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp testdata/cp-test.txt multinode-022829-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp multinode-022829-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1501450262/001/cp-test_multinode-022829-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp multinode-022829-m02:/home/docker/cp-test.txt multinode-022829:/home/docker/cp-test_multinode-022829-m02_multinode-022829.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829 "sudo cat /home/docker/cp-test_multinode-022829-m02_multinode-022829.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp multinode-022829-m02:/home/docker/cp-test.txt multinode-022829-m03:/home/docker/cp-test_multinode-022829-m02_multinode-022829-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m03 "sudo cat /home/docker/cp-test_multinode-022829-m02_multinode-022829-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp testdata/cp-test.txt multinode-022829-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp multinode-022829-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile1501450262/001/cp-test_multinode-022829-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp multinode-022829-m03:/home/docker/cp-test.txt multinode-022829:/home/docker/cp-test_multinode-022829-m03_multinode-022829.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829 "sudo cat /home/docker/cp-test_multinode-022829-m03_multinode-022829.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 cp multinode-022829-m03:/home/docker/cp-test.txt multinode-022829-m02:/home/docker/cp-test_multinode-022829-m03_multinode-022829-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 ssh -n multinode-022829-m02 "sudo cat /home/docker/cp-test_multinode-022829-m03_multinode-022829-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.73s)
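
The copy matrix above is built from three shapes of minikube cp; a condensed sketch (profile name and paths are illustrative):
$ # host -> node, node -> host, and node -> node copies
$ minikube -p demo-multinode cp testdata/cp-test.txt demo-multinode:/home/docker/cp-test.txt
$ minikube -p demo-multinode cp demo-multinode:/home/docker/cp-test.txt /tmp/cp-test.txt
$ minikube -p demo-multinode cp demo-multinode:/home/docker/cp-test.txt demo-multinode-m02:/home/docker/cp-test.txt
$ # verify the file landed on the target node
$ minikube -p demo-multinode ssh -n demo-multinode-m02 "sudo cat /home/docker/cp-test.txt"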

                                                
                                    
TestMultiNode/serial/StopNode (13.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-022829 node stop m03: (12.231967614s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-022829 status: exit status 7 (740.78904ms)

                                                
                                                
-- stdout --
	multinode-022829
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022829-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022829-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-022829 status --alsologtostderr: exit status 7 (747.723185ms)

                                                
                                                
-- stdout --
	multinode-022829
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-022829-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-022829-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 02:31:00.770038    8789 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:31:00.770269    8789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:31:00.770276    8789 out.go:309] Setting ErrFile to fd 2...
	I0114 02:31:00.770280    8789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:31:00.770397    8789 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:31:00.770606    8789 out.go:303] Setting JSON to false
	I0114 02:31:00.770631    8789 mustload.go:65] Loading cluster: multinode-022829
	I0114 02:31:00.770673    8789 notify.go:220] Checking for updates...
	I0114 02:31:00.770935    8789 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:31:00.770947    8789 status.go:255] checking status of multinode-022829 ...
	I0114 02:31:00.771350    8789 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:31:00.829060    8789 status.go:330] multinode-022829 host status = "Running" (err=<nil>)
	I0114 02:31:00.829094    8789 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:31:00.829354    8789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829
	I0114 02:31:00.887494    8789 host.go:66] Checking if "multinode-022829" exists ...
	I0114 02:31:00.887761    8789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:31:00.887877    8789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:00.950296    8789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51111 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829/id_rsa Username:docker}
	I0114 02:31:01.036034    8789 ssh_runner.go:195] Run: systemctl --version
	I0114 02:31:01.040841    8789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:31:01.050493    8789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-022829
	I0114 02:31:01.108202    8789 kubeconfig.go:92] found "multinode-022829" server: "https://127.0.0.1:51115"
	I0114 02:31:01.108228    8789 api_server.go:165] Checking apiserver status ...
	I0114 02:31:01.108276    8789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 02:31:01.118270    8789 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1764/cgroup
	W0114 02:31:01.126699    8789 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1764/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0114 02:31:01.126775    8789 ssh_runner.go:195] Run: ls
	I0114 02:31:01.131246    8789 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51115/healthz ...
	I0114 02:31:01.136552    8789 api_server.go:278] https://127.0.0.1:51115/healthz returned 200:
	ok
	I0114 02:31:01.136564    8789 status.go:421] multinode-022829 apiserver status = Running (err=<nil>)
	I0114 02:31:01.136574    8789 status.go:257] multinode-022829 status: &{Name:multinode-022829 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 02:31:01.136586    8789 status.go:255] checking status of multinode-022829-m02 ...
	I0114 02:31:01.136862    8789 cli_runner.go:164] Run: docker container inspect multinode-022829-m02 --format={{.State.Status}}
	I0114 02:31:01.194021    8789 status.go:330] multinode-022829-m02 host status = "Running" (err=<nil>)
	I0114 02:31:01.194044    8789 host.go:66] Checking if "multinode-022829-m02" exists ...
	I0114 02:31:01.194332    8789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-022829-m02
	I0114 02:31:01.251627    8789 host.go:66] Checking if "multinode-022829-m02" exists ...
	I0114 02:31:01.251901    8789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 02:31:01.251968    8789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-022829-m02
	I0114 02:31:01.309406    8789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51174 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1559/.minikube/machines/multinode-022829-m02/id_rsa Username:docker}
	I0114 02:31:01.391704    8789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 02:31:01.401540    8789 status.go:257] multinode-022829-m02 status: &{Name:multinode-022829-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0114 02:31:01.401566    8789 status.go:255] checking status of multinode-022829-m03 ...
	I0114 02:31:01.401872    8789 cli_runner.go:164] Run: docker container inspect multinode-022829-m03 --format={{.State.Status}}
	I0114 02:31:01.460606    8789 status.go:330] multinode-022829-m03 host status = "Stopped" (err=<nil>)
	I0114 02:31:01.460631    8789 status.go:343] host is not running, skipping remaining checks
	I0114 02:31:01.460642    8789 status.go:257] multinode-022829-m03 status: &{Name:multinode-022829-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (13.72s)
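
Stopping a single node and reading the degraded status, sketched; note that status intentionally exits non-zero (7) while any node is stopped:
$ minikube -p demo-multinode node stop m03
$ minikube -p demo-multinode status; echo "exit: $?"   # expect exit 7 with m03 reported as Stopped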

                                                
                                    
TestMultiNode/serial/StartAfterStop (19.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-022829 node start m03 --alsologtostderr: (18.273555736s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.35s)
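
Bringing the stopped node back, sketched with the same illustrative profile:
$ minikube -p demo-multinode node start m03 --alsologtostderr
$ minikube -p demo-multinode status   # exits 0 again once all nodes are Running
$ kubectl get nodes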

                                                
                                    
TestMultiNode/serial/DeleteNode (7.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-022829 node delete m03: (6.964780282s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (7.80s)
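
Removing the extra node by hand, sketched:
$ minikube -p demo-multinode node delete m03
$ minikube -p demo-multinode status --alsologtostderr
$ kubectl get nodes   # only the remaining two nodes should be listed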

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-022829 stop: (24.507705249s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-022829 status: exit status 7 (170.618165ms)

                                                
                                                
-- stdout --
	multinode-022829
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-022829-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-022829 status --alsologtostderr: exit status 7 (168.579312ms)

                                                
                                                
-- stdout --
	multinode-022829
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-022829-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 02:35:33.288785    9443 out.go:296] Setting OutFile to fd 1 ...
	I0114 02:35:33.289038    9443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:35:33.289045    9443 out.go:309] Setting ErrFile to fd 2...
	I0114 02:35:33.289049    9443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 02:35:33.289152    9443 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1559/.minikube/bin
	I0114 02:35:33.289357    9443 out.go:303] Setting JSON to false
	I0114 02:35:33.289381    9443 mustload.go:65] Loading cluster: multinode-022829
	I0114 02:35:33.289421    9443 notify.go:220] Checking for updates...
	I0114 02:35:33.289719    9443 config.go:180] Loaded profile config "multinode-022829": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 02:35:33.289732    9443 status.go:255] checking status of multinode-022829 ...
	I0114 02:35:33.290148    9443 cli_runner.go:164] Run: docker container inspect multinode-022829 --format={{.State.Status}}
	I0114 02:35:33.345566    9443 status.go:330] multinode-022829 host status = "Stopped" (err=<nil>)
	I0114 02:35:33.345585    9443 status.go:343] host is not running, skipping remaining checks
	I0114 02:35:33.345591    9443 status.go:257] multinode-022829 status: &{Name:multinode-022829 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 02:35:33.345619    9443 status.go:255] checking status of multinode-022829-m02 ...
	I0114 02:35:33.345922    9443 cli_runner.go:164] Run: docker container inspect multinode-022829-m02 --format={{.State.Status}}
	I0114 02:35:33.401807    9443 status.go:330] multinode-022829-m02 host status = "Stopped" (err=<nil>)
	I0114 02:35:33.401827    9443 status.go:343] host is not running, skipping remaining checks
	I0114 02:35:33.401833    9443 status.go:257] multinode-022829-m02 status: &{Name:multinode-022829-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.85s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (80.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022829 --wait=true -v=8 --alsologtostderr --driver=docker 
E0114 02:35:42.889167    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-022829 --wait=true -v=8 --alsologtostderr --driver=docker : (1m19.619331621s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-022829 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.54s)
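
The stop/restart pair exercised by the two tests above, sketched:
$ minikube -p demo-multinode stop                                  # stops every node in the profile
$ minikube start -p demo-multinode --wait=true --driver=docker     # restarts all nodes and waits for readiness
$ kubectl get nodes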

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-022829
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022829-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-022829-m02 --driver=docker : exit status 14 (392.114038ms)

                                                
                                                
-- stdout --
	* [multinode-022829-m02] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-022829-m02' is duplicated with machine name 'multinode-022829-m02' in profile 'multinode-022829'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-022829-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-022829-m03 --driver=docker : (29.900953999s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-022829
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-022829: exit status 80 (480.211655ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-022829
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-022829-m03 already exists in multinode-022829-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-022829-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-022829-m03: (2.558319531s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.39s)
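
The name-conflict guard above in isolation: a new profile may not reuse a machine name that already belongs to another profile (exit code 14), sketched:
$ minikube node list -p demo-multinode                    # lists demo-multinode, demo-multinode-m02, ...
$ minikube start -p demo-multinode-m02 --driver=docker    # rejected with MK_USAGE: the name collides with an existing machine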

                                                
                                    
TestPreload (161.24s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-023736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-023736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m13.020301356s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-023736 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-023736 -- docker pull gcr.io/k8s-minikube/busybox: (3.260617135s)
preload_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-023736 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6
E0114 02:38:59.150696    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:39:19.832757    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
preload_test.go:67: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-023736 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6: (1m21.734621459s)
preload_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-023736 -- docker images
helpers_test.go:175: Cleaning up "test-preload-023736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-023736
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-023736: (2.805676889s)
--- PASS: TestPreload (161.24s)
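
The preload scenario above in shell form: start without the preloaded image tarball, pull an extra image, upgrade the Kubernetes version, then check the image survived (profile name is illustrative):
$ minikube start -p demo-preload --memory=2200 --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
$ minikube ssh -p demo-preload -- docker pull gcr.io/k8s-minikube/busybox
$ minikube start -p demo-preload --memory=2200 --wait=true --driver=docker --kubernetes-version=v1.24.6
$ minikube ssh -p demo-preload -- docker images   # the pulled busybox image should still be listed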

                                                
                                    
TestScheduledStopUnix (103.83s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-024017 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-024017 --memory=2048 --driver=docker : (29.584680386s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-024017 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-024017 -n scheduled-stop-024017
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-024017 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-024017 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-024017 -n scheduled-stop-024017
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-024017
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-024017 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-024017
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-024017: exit status 7 (118.806697ms)

                                                
                                                
-- stdout --
	scheduled-stop-024017
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-024017 -n scheduled-stop-024017
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-024017 -n scheduled-stop-024017: exit status 7 (112.910196ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-024017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-024017
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-024017: (2.297841762s)
--- PASS: TestScheduledStopUnix (103.83s)
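
Scheduled stop as driven above, sketched (profile name and delays are illustrative):
$ minikube stop -p demo-sched --schedule 5m                    # arm a stop five minutes out
$ minikube status --format='{{.TimeToStop}}' -p demo-sched     # shows the remaining time
$ minikube stop -p demo-sched --cancel-scheduled               # disarm it
$ minikube stop -p demo-sched --schedule 15s                   # re-arm with a short delay; status later reports Stopped (exit 7)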

                                                
                                    
TestSkaffold (69.99s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1051105466 version
skaffold_test.go:63: skaffold version: v2.0.4
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-024201 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-024201 --memory=2600 --driver=docker : (28.137735901s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1051105466 run --minikube-profile skaffold-024201 --kube-context skaffold-024201 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1051105466 run --minikube-profile skaffold-024201 --kube-context skaffold-024201 --status-check=true --port-forward=false --interactive=false: (23.791006647s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-7c87df65bd-d748z" [9ac1320f-39a8-48a7-87fa-35ac899b240c] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.010986764s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-fd956567c-wwpld" [25ca5ec4-c601-4ea9-85fe-32fe1d09be1b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009826685s
helpers_test.go:175: Cleaning up "skaffold-024201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-024201
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-024201: (2.894768394s)
--- PASS: TestSkaffold (69.99s)
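
The skaffold flow above reduced to its two moving parts (a normally installed skaffold binary stands in for the test's temporary copy; profile name is illustrative):
$ minikube start -p demo-skaffold --memory=2600 --driver=docker
$ skaffold run --minikube-profile demo-skaffold --kube-context demo-skaffold --status-check=true --port-forward=false --interactive=false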

                                                
                                    
TestInsufficientStorage (14.28s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-024311 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-024311 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.14286287s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2c43f047-afc0-4867-b167-a60feec56e20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-024311] minikube v1.28.0 on Darwin 13.0.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d16f1ed-0bfc-412a-b646-842fbf429942","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"4330bb99-c7f4-47cf-8683-d042162d59c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig"}}
	{"specversion":"1.0","id":"11ec1199-9a2b-4019-a1a3-bae833a0559a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"0c04ff6b-fb18-4781-a481-8aef57963681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"39337dcd-2165-4528-89d8-3bb1f70807ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube"}}
	{"specversion":"1.0","id":"29ca7f54-355a-4f4f-aca9-e4ccf534410f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a0c4439b-e49f-4f67-bcdd-e094344aecaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"318217d1-c0b7-498f-85ff-e45a3fb5cbac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5de9af8e-3919-4e91-9c84-1137fbbfc8b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"0bd33db3-1da2-4599-b969-9dd8f779e0d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-024311 in cluster insufficient-storage-024311","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"22ede8e6-0f36-4d49-b053-273713bae818","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"dfbcb322-c400-4b87-81d0-51641ccd4154","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5329953d-ffe8-4b18-9382-95a94a72236c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-024311 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-024311 --output=json --layout=cluster: exit status 7 (390.35432ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-024311","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-024311","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 02:43:22.843731   11225 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-024311" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-024311 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-024311 --output=json --layout=cluster: exit status 7 (390.085506ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-024311","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-024311","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0114 02:43:23.233934   11235 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-024311" does not appear in /Users/jenkins/minikube-integration/15642-1559/kubeconfig
	E0114 02:43:23.243307   11235 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/insufficient-storage-024311/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-024311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-024311
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-024311: (2.360305976s)
--- PASS: TestInsufficientStorage (14.28s)
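
The storage guard above by hand: with /var (nearly) full, start aborts with exit code 26 and status --layout=cluster reports InsufficientStorage (507); the remediation is the advice embedded in the JSON error (sketch, illustrative profile name):
$ minikube start -p demo --output=json --wait=true --driver=docker; echo "exit: $?"   # 26 when Docker is out of disk space
$ minikube status -p demo --output=json --layout=cluster                              # StatusCode 507 / InsufficientStorage
$ docker system prune                     # free Docker Desktop disk space (optionally with -a)
$ minikube ssh -- docker system prune     # prune inside the node when using the Docker container runtime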

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (18.41s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15642
- KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current422365660/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current422365660/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current422365660/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current422365660/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (18.41s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (21.12s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15642
- KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4209421851/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4209421851/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4209421851/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4209421851/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
E0114 02:43:59.151988    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (21.12s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E0114 02:48:59.157630    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/Setup (3.53s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-024857
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-024857: (3.574325607s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.57s)

                                                
                                    
TestPause/serial/Start (43.91s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-025109 --memory=2048 --install-addons=false --wait=all --driver=docker 

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-025109 --memory=2048 --install-addons=false --wait=all --driver=docker : (43.914217498s)
--- PASS: TestPause/serial/Start (43.91s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (39.03s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-025109 --alsologtostderr -v=1 --driver=docker 
E0114 02:52:22.898597    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-025109 --alsologtostderr -v=1 --driver=docker : (39.012310978s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.03s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-025109 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-025109 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-025109 --output=json --layout=cluster: exit status 2 (408.996976ms)

                                                
                                                
-- stdout --
	{"Name":"pause-025109","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-025109","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-025109 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)
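
The pause cycle covered by this serial group, sketched; note that a paused cluster makes status --layout=cluster exit 2 with StatusCode 418 (Paused):
$ minikube pause -p demo-pause --alsologtostderr -v=5
$ minikube status -p demo-pause --output=json --layout=cluster; echo "exit: $?"   # Paused (418), exit status 2
$ minikube unpause -p demo-pause --alsologtostderr -v=5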

                                                
                                    
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-025109 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
TestPause/serial/DeletePaused (2.61s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-025109 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-025109 --alsologtostderr -v=5: (2.6080026s)
--- PASS: TestPause/serial/DeletePaused (2.61s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.57s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-025109
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-025109: exit status 1 (53.915819ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-025109

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-025238 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-025238 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (381.855087ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-025238] minikube v1.28.0 on Darwin 13.0.1
	  - MINIKUBE_LOCATION=15642
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1559/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1559/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)
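
The usage error above is the guard against combining --no-kubernetes with an explicit --kubernetes-version (exit code 14); the valid forms, sketched with an illustrative profile name:
$ minikube start -p demo-nok8s --no-kubernetes --driver=docker    # container only, no kubelet or apiserver
$ minikube config unset kubernetes-version                        # clear a globally configured version if one is set
$ minikube ssh -p demo-nok8s "sudo systemctl is-active --quiet service kubelet"; echo "exit: $?"   # non-zero: kubelet is not running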

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (29.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-025238 --driver=docker 
E0114 02:52:58.410185    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-025238 --driver=docker : (29.339664544s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-025238 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.76s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-025238 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-025238 --no-kubernetes --driver=docker : (14.201555139s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-025238 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-025238 status -o json: exit status 2 (403.217172ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-025238","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-025238
E0114 02:53:26.099694    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-025238: (2.465040837s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.07s)

TestNoKubernetes/serial/Start (6.42s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-025238 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-025238 --no-kubernetes --driver=docker : (6.420671149s)
--- PASS: TestNoKubernetes/serial/Start (6.42s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-025238 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-025238 "sudo systemctl is-active --quiet service kubelet": exit status 1 (423.363752ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

TestNoKubernetes/serial/ProfileList (1.34s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.34s)

TestNoKubernetes/serial/Stop (1.59s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-025238
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-025238: (1.592156295s)
--- PASS: TestNoKubernetes/serial/Stop (1.59s)

TestNoKubernetes/serial/StartNoArgs (4.23s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-025238 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-025238 --driver=docker : (4.232085764s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.23s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-025238 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-025238 "sudo systemctl is-active --quiet service kubelet": exit status 1 (376.856603ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestNetworkPlugins/group/auto/Start (44.21s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-024325 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
E0114 02:53:59.159623    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 02:54:19.840475    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-024325 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (44.214514001s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.21s)

TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-024325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

TestNetworkPlugins/group/auto/NetCatPod (14.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-024325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-c67rh" [30b8d9d6-a04d-4c0b-8a9b-aaea64621f7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-c67rh" [30b8d9d6-a04d-4c0b-8a9b-aaea64621f7f] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.00611673s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.20s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-024325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (5.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.114164695s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.11s)

TestNetworkPlugins/group/kindnet/Start (51.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-024326 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-024326 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (51.78551414s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.79s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-hstfz" [ac207877-1d24-405b-995d-263a1ff55b6d] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014479918s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-024326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-024326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-2jgnc" [b8257584-cf8c-4803-824c-a2e45884e199] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-2jgnc" [b8257584-cf8c-4803-824c-a2e45884e199] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.007585393s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.22s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-024326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-024326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-024326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/cilium/Start (101.64s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-024326 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-024326 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m41.641693663s)
--- PASS: TestNetworkPlugins/group/cilium/Start (101.64s)

TestNetworkPlugins/group/calico/Start (329.15s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-024326 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-024326 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (5m29.149503536s)
--- PASS: TestNetworkPlugins/group/calico/Start (329.15s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-74hzv" [8ee047e5-3789-45b5-98e0-de011fdb64f6] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.017832635s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-024326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.43s)

TestNetworkPlugins/group/cilium/NetCatPod (14.64s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-024326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-98fb7" [0155244a-cb55-4716-9fe9-0fb3f807b131] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 02:57:58.411530    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-98fb7" [0155244a-cb55-4716-9fe9-0fb3f807b131] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 14.009678011s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (14.64s)

TestNetworkPlugins/group/cilium/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-024326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.12s)

TestNetworkPlugins/group/cilium/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-024326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.11s)

TestNetworkPlugins/group/cilium/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-024326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (49.95s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-024326 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E0114 02:58:59.160040    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-024326 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (49.947203834s)
--- PASS: TestNetworkPlugins/group/false/Start (49.95s)

TestNetworkPlugins/group/false/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-024326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

TestNetworkPlugins/group/false/NetCatPod (13.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-024326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-8xmzx" [91ba4a4c-0fbf-4c5f-96c7-090ca86d20fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-8xmzx" [91ba4a4c-0fbf-4c5f-96c7-090ca86d20fb] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.009691057s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.25s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-024326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-024326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (5.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-024326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-024326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.106345979s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)

TestNetworkPlugins/group/bridge/Start (92.64s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-024325 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
E0114 02:59:27.554669    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:27.559847    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:27.569947    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:27.590144    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:27.630275    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:27.710542    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:27.870740    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:28.192620    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:28.832917    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:30.113067    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:32.673872    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:37.794034    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 02:59:48.034312    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 03:00:08.514653    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 03:00:41.619776    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:41.625680    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:41.636789    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:41.657051    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:41.697175    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:41.777435    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:41.938029    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:42.258182    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:42.898791    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:44.179196    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:46.740159    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:00:49.475170    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 03:00:51.860369    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-024325 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (1m32.642897028s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.64s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-024325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

TestNetworkPlugins/group/bridge/NetCatPod (15.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-024325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-w5dch" [a7980230-93bd-4a43-833c-1b99ba4c3529] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 03:01:02.100957    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-w5dch" [a7980230-93bd-4a43-833c-1b99ba4c3529] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.006979328s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.18s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-024325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (54.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-024325 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0114 03:01:22.581347    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:02:03.541930    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-024325 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (54.055941061s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (54.06s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-024325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-024325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-5qrft" [e857c32c-853d-4ed0-b6c2-520fcbaafc1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 03:02:11.396229    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-5qrft" [e857c32c-853d-4ed0-b6c2-520fcbaafc1d] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.009879635s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.21s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-tbmxg" [159e2941-b0df-4a6e-b9fc-9d300c851922] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018419141s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-024326 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (14.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-024326 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-gb6m6" [db404aff-a743-4f9f-b9f8-ba91d51b2b3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-gb6m6" [db404aff-a743-4f9f-b9f8-ba91d51b2b3c] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.007353349s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-024325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/Start (49.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-024325 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-024325 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (49.150700826s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (49.15s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-024326 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-024326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-024326 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-024325 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.18s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-024325 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-s7n7f" [b837aefd-916c-49f5-83c3-33877c638bec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-s7n7f" [b837aefd-916c-49f5-83c3-33877c638bec] Running
E0114 03:03:25.462765    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:03:27.578322    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.008316928s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.18s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-024325 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-024325 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (57.31s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-030433 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E0114 03:04:41.522472    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:04:55.229222    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 03:05:22.479364    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:05:30.450102    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-030433 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (57.314770851s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.31s)

TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-030433 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [b401d2b4-a257-4c86-997f-7e078b470ee2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [b401d2b4-a257-4c86-997f-7e078b470ee2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.013377374s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-030433 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-030433 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-030433 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/no-preload/serial/Stop (12.41s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-030433 --alsologtostderr -v=3
E0114 03:05:41.609925    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-030433 --alsologtostderr -v=3: (12.409087872s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.41s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-030433 -n no-preload-030433
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-030433 -n no-preload-030433: exit status 7 (112.999125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-030433 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/no-preload/serial/SecondStart (300.74s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-030433 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E0114 03:05:54.849868    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:54.855119    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:54.866120    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:54.887257    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:54.927343    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:55.008148    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:55.169425    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:55.490503    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:56.130971    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:57.411486    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:05:59.971770    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:06:05.092130    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:06:09.294031    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:06:15.332403    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:06:35.812834    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:06:44.400469    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-030433 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (5m0.295023721s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-030433 -n no-preload-030433
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.74s)

TestStartStop/group/old-k8s-version/serial/Stop (1.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-030235 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-030235 --alsologtostderr -v=3: (1.606363564s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.61s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-030235 -n old-k8s-version-030235: exit status 7 (115.720416ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-030235 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0114 03:08:20.165140    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-n4w88" [bca43cf7-f832-4e07-ad63-20d61a0e2d8b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0114 03:10:54.851442    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:10:58.890357    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-n4w88" [bca43cf7-f832-4e07-ad63-20d61a0e2d8b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.015401426s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-n4w88" [bca43cf7-f832-4e07-ad63-20d61a0e2d8b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00806468s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-030433 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-030433 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-030433 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-030433 -n no-preload-030433
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-030433 -n no-preload-030433: exit status 2 (420.146263ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-030433 -n no-preload-030433
E0114 03:11:22.537023    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-030433 -n no-preload-030433: exit status 2 (418.238904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-030433 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-030433 -n no-preload-030433
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-030433 -n no-preload-030433
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.44s)
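
Note: for reference, the Pause sequence above corresponds to the following manual workflow; the commands are taken verbatim from this run, and the profile name no-preload-030433 is specific to it. While the profile is paused, status --format={{.APIServer}} reports Paused and status --format={{.Kubelet}} reports Stopped, each with exit status 2, which the test accepts before unpausing:

	out/minikube-darwin-amd64 pause -p no-preload-030433 --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-030433 -n no-preload-030433   # prints "Paused", exit status 2
	out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-030433 -n no-preload-030433     # prints "Stopped", exit status 2
	out/minikube-darwin-amd64 unpause -p no-preload-030433 --alsologtostderr -v=1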

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (82.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-031128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3
E0114 03:12:07.569222    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:12:11.847970    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:12:35.264390    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/enable-default-cni-024325/client.crt: no such file or directory
E0114 03:12:39.535874    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/calico-024326/client.crt: no such file or directory
E0114 03:12:46.605726    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/cilium-024326/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-031128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (1m22.972786856s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-031128 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [0c52de9e-2818-4b32-87f4-862a7824b99b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [0c52de9e-2818-4b32-87f4-862a7824b99b] Running
E0114 03:12:58.408871    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/skaffold-024201/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.015152614s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-031128 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-031128 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-031128 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)
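
Note: a minimal sketch of the EnableAddonWhileActive check above, reusing only commands that appear in this log; the embed-certs-031128 profile and the fake.domain registry override are specific to this suite. The metrics-server addon is enabled with substituted test images, then the resulting Deployment is inspected with kubectl:

	out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-031128 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context embed-certs-031128 describe deploy/metrics-server -n kube-system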

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-031128 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-031128 --alsologtostderr -v=3: (12.393459218s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-031128 -n embed-certs-031128
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-031128 -n embed-certs-031128: exit status 7 (114.574163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-031128 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (303.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-031128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3
E0114 03:13:15.041825    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:13:42.733650    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kubenet-024325/client.crt: no such file or directory
E0114 03:13:59.155593    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/addons-020619/client.crt: no such file or directory
E0114 03:14:00.517158    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/false-024326/client.crt: no such file or directory
E0114 03:14:19.840133    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/functional-021137/client.crt: no such file or directory
E0114 03:14:27.549598    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 03:15:31.217425    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:31.223603    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:31.235755    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:31.256363    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:31.297499    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:31.378300    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:31.540320    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:31.861186    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:32.502567    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:33.784931    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:36.345473    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:41.465963    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:41.614433    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/kindnet-024326/client.crt: no such file or directory
E0114 03:15:50.591904    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/auto-024325/client.crt: no such file or directory
E0114 03:15:51.706604    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
E0114 03:15:54.853506    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/bridge-024325/client.crt: no such file or directory
E0114 03:16:12.187090    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-031128 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (5m2.839856105s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-031128 -n embed-certs-031128
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (303.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-z5hz5" [700e04f7-c5e6-43ce-a0a7-fe98a4435ff7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-z5hz5" [700e04f7-c5e6-43ce-a0a7-fe98a4435ff7] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.017916662s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-z5hz5" [700e04f7-c5e6-43ce-a0a7-fe98a4435ff7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006490389s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-031128 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-031128 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-031128 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-031128 -n embed-certs-031128
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-031128 -n embed-certs-031128: exit status 2 (421.023729ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-031128 -n embed-certs-031128
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-031128 -n embed-certs-031128: exit status 2 (411.534318ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-031128 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-031128 -n embed-certs-031128
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-031128 -n embed-certs-031128
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-031843 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-031843 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (54.879157653s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-031843 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [af7d8429-8391-4500-8701-c66f9aceb099] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [af7d8429-8391-4500-8701-c66f9aceb099] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.012638909s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-031843 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-031843 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-031843 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-031843 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-031843 --alsologtostderr -v=3: (12.385056598s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-031843 -n default-k8s-diff-port-031843
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-031843 -n default-k8s-diff-port-031843: exit status 7 (117.075583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-031843 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-031843 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-031843 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (4m57.823666581s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-031843 -n default-k8s-diff-port-031843
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (22.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-m7jt5" [bd4d5f15-ebec-4b15-8d13-0bad2da5b6bc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-m7jt5" [bd4d5f15-ebec-4b15-8d13-0bad2da5b6bc] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.012452059s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (22.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-m7jt5" [bd4d5f15-ebec-4b15-8d13-0bad2da5b6bc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00962952s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-031843 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-031843 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-031843 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-031843 -n default-k8s-diff-port-031843
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-031843 -n default-k8s-diff-port-031843: exit status 2 (411.281938ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-031843 -n default-k8s-diff-port-031843
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-031843 -n default-k8s-diff-port-031843: exit status 2 (421.199699ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-031843 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-031843 -n default-k8s-diff-port-031843
E0114 03:25:31.234766    2728 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1559/.minikube/profiles/no-preload-030433/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-031843 -n default-k8s-diff-port-031843
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-032535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-032535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (43.477598512s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-032535 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-032535 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.0286986s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-032535 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-032535 --alsologtostderr -v=3: (12.39381376s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-032535 -n newest-cni-032535
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-032535 -n newest-cni-032535: exit status 7 (114.512909ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-032535 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (18.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-032535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-032535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (18.131954362s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-032535 -n newest-cni-032535
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-032535 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-032535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-darwin-amd64 pause -p newest-cni-032535 --alsologtostderr -v=1: (1.025357199s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-032535 -n newest-cni-032535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-032535 -n newest-cni-032535: exit status 2 (415.772014ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-032535 -n newest-cni-032535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-032535 -n newest-cni-032535: exit status 2 (424.976759ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-032535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-032535 -n newest-cni-032535
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-032535 -n newest-cni-032535
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.52s)

                                                
                                    

Test skip (18/296)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (15.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 10.864513ms
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-g2fzx" [8dca6d1d-149d-4b50-a9f6-2e309a9f51ab] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012878522s
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-shbbj" [ee6790e9-0730-418f-baf7-cf900ae5c990] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011712141s
addons_test.go:297: (dbg) Run:  kubectl --context addons-020619 delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context addons-020619 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) Done: kubectl --context addons-020619 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.866672118s)
addons_test.go:312: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.98s)

                                                
                                    
TestAddons/parallel/Ingress (13.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Run:  kubectl --context addons-020619 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:189: (dbg) Run:  kubectl --context addons-020619 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context addons-020619 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [8aae6c60-fd05-4c6f-a4d8-81609671629d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [8aae6c60-fd05-4c6f-a4d8-81609671629d] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.009247356s
addons_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 -p addons-020619 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (13.24s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-021137 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-021137 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-kn4rd" [56219d75-d311-4afc-8d82-d7b68afa6a2b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-6458c8fb6f-kn4rd" [56219d75-d311-4afc-8d82-d7b68afa6a2b] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.036910281s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.14s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-024325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-024325
--- SKIP: TestNetworkPlugins/group/flannel (0.59s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel (0.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-024326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-024326
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.54s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-031842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-031842
--- SKIP: TestStartStop/group/disable-driver-mounts (0.41s)

                                                
                                    